From patchwork Thu Mar 10 16:45:20 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12776749
Date: Thu, 10 Mar 2022 08:45:20 -0800
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>
Message-Id: <20220310164532.1821490-2-bgardon@google.com>
Subject: [PATCH 01/13] selftests: KVM: Dump VM stats in binary stats test
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon
X-Mailing-List: kvm@vger.kernel.org

Add kvm_util library functions to read KVM stats through the binary
stats interface and then dump them to stdout when running the binary
stats test. Subsequent commits will extend the kvm_util code and use it
to make assertions in a test for NX hugepages.
CC: Jing Zhang
Signed-off-by: Ben Gardon
---
 .../selftests/kvm/include/kvm_util_base.h     |   1 +
 .../selftests/kvm/kvm_binary_stats_test.c     |   3 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 143 ++++++++++++++++++
 3 files changed, 147 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 92cef0ffb19e..c5f4a67772cb 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -400,6 +400,7 @@ void assert_on_unhandled_exception(struct kvm_vm *vm, uint32_t vcpuid);
 
 int vm_get_stats_fd(struct kvm_vm *vm);
 int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid);
+void dump_vm_stats(struct kvm_vm *vm);
 
 uint32_t guest_get_vcpuid(void);
 
diff --git a/tools/testing/selftests/kvm/kvm_binary_stats_test.c b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
index 17f65d514915..afc4701ce8dd 100644
--- a/tools/testing/selftests/kvm/kvm_binary_stats_test.c
+++ b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
@@ -174,6 +174,9 @@ static void vm_stats_test(struct kvm_vm *vm)
 	stats_test(stats_fd);
 	close(stats_fd);
 	TEST_ASSERT(fcntl(stats_fd, F_GETFD) == -1, "Stats fd not freed");
+
+	/* Dump VM stats */
+	dump_vm_stats(vm);
 }
 
 static void vcpu_stats_test(struct kvm_vm *vm, int vcpu_id)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1665a220abcb..4d21c3b46780 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2556,3 +2556,146 @@ int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid)
 
 	return ioctl(vcpu->fd, KVM_GET_STATS_FD, NULL);
 }
+
+/* Caller is responsible for freeing the returned kvm_stats_header. */
+static struct kvm_stats_header *read_vm_stats_header(int stats_fd)
+{
+	struct kvm_stats_header *header;
+	ssize_t ret;
+
+	/* Read kvm stats header */
+	header = malloc(sizeof(*header));
+	TEST_ASSERT(header, "Allocate memory for stats header");
+
+	ret = read(stats_fd, header, sizeof(*header));
+	TEST_ASSERT(ret == sizeof(*header), "Read stats header");
+
+	return header;
+}
+
+static void dump_header(int stats_fd, struct kvm_stats_header *header)
+{
+	ssize_t ret;
+	char *id;
+
+	printf("flags: %u\n", header->flags);
+	printf("name size: %u\n", header->name_size);
+	printf("num_desc: %u\n", header->num_desc);
+	printf("id_offset: %u\n", header->id_offset);
+	printf("desc_offset: %u\n", header->desc_offset);
+	printf("data_offset: %u\n", header->data_offset);
+
+	/* Read kvm stats id string */
+	id = malloc(header->name_size);
+	TEST_ASSERT(id, "Allocate memory for id string");
+	ret = pread(stats_fd, id, header->name_size, header->id_offset);
+	TEST_ASSERT(ret == header->name_size, "Read id string");
+
+	printf("id: %s\n", id);
+
+	free(id);
+}
+
+static ssize_t stats_desc_size(struct kvm_stats_header *header)
+{
+	return sizeof(struct kvm_stats_desc) + header->name_size;
+}
+
+/* Caller is responsible for freeing the returned kvm_stats_desc. */
+static struct kvm_stats_desc *read_vm_stats_desc(int stats_fd,
+						 struct kvm_stats_header *header)
+{
+	struct kvm_stats_desc *stats_desc;
+	size_t size_desc;
+	ssize_t ret;
+
+	size_desc = header->num_desc * stats_desc_size(header);
+
+	/* Allocate memory for stats descriptors */
+	stats_desc = malloc(size_desc);
+	TEST_ASSERT(stats_desc, "Allocate memory for stats descriptors");
+
+	/* Read kvm stats descriptors */
+	ret = pread(stats_fd, stats_desc, size_desc, header->desc_offset);
+	TEST_ASSERT(ret == size_desc, "Read KVM stats descriptors");
+
+	return stats_desc;
+}
+
+/* Caller is responsible for freeing the memory *data. */
+static int read_stat_data(int stats_fd, struct kvm_stats_header *header,
+			  struct kvm_stats_desc *desc, uint64_t **data)
+{
+	u64 *stats_data;
+	ssize_t ret;
+
+	stats_data = malloc(desc->size * sizeof(*stats_data));
+
+	ret = pread(stats_fd, stats_data, desc->size * sizeof(*stats_data),
+		    header->data_offset + desc->offset);
+
+	/* ret is in bytes. */
+	ret = ret / sizeof(*stats_data);
+
+	TEST_ASSERT(ret == desc->size,
+		    "Read data of KVM stats: %s", desc->name);
+
+	*data = stats_data;
+
+	return ret;
+}
+
+static void dump_stat(int stats_fd, struct kvm_stats_header *header,
+		      struct kvm_stats_desc *desc)
+{
+	u64 *stats_data;
+	ssize_t ret;
+	int i;
+
+	printf("\tflags: %u\n", desc->flags);
+	printf("\texponent: %u\n", desc->exponent);
+	printf("\tsize: %u\n", desc->size);
+	printf("\toffset: %u\n", desc->offset);
+	printf("\tbucket_size: %u\n", desc->bucket_size);
+	printf("\tname: %s\n", (char *)&desc->name);
+
+	ret = read_stat_data(stats_fd, header, desc, &stats_data);
+
+	printf("\tdata: %lu", *stats_data);
+	for (i = 1; i < ret; i++)
+		printf(", %lu", *(stats_data + i));
+	printf("\n\n");
+
+	free(stats_data);
+}
+
+void dump_vm_stats(struct kvm_vm *vm)
+{
+	struct kvm_stats_desc *stats_desc;
+	struct kvm_stats_header *header;
+	struct kvm_stats_desc *desc;
+	size_t size_desc;
+	int stats_fd;
+	int i;
+
+	stats_fd = vm_get_stats_fd(vm);
+
+	header = read_vm_stats_header(stats_fd);
+	dump_header(stats_fd, header);
+
+	stats_desc = read_vm_stats_desc(stats_fd, header);
+
+	size_desc = stats_desc_size(header);
+
+	/* Read kvm stats data one by one */
+	for (i = 0; i < header->num_desc; ++i) {
+		desc = (void *)stats_desc + (i * size_desc);
+		dump_stat(stats_fd, header, desc);
+	}
+
+	free(stats_desc);
+	free(header);
+
+	close(stats_fd);
+}

From patchwork Thu Mar 10 16:45:21 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12776750
Date: Thu, 10 Mar 2022 08:45:21 -0800
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>
Message-Id: <20220310164532.1821490-3-bgardon@google.com>
Subject: [PATCH 02/13] selftests: KVM: Test reading a single stat
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon
X-Mailing-List: kvm@vger.kernel.org

Retrieve the value of a single stat by name in the binary stats test to
ensure the kvm_util library functions work.
CC: Jing Zhang
Signed-off-by: Ben Gardon
---
 .../selftests/kvm/include/kvm_util_base.h     |  1 +
 .../selftests/kvm/kvm_binary_stats_test.c     |  3 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 53 +++++++++++++++++++
 3 files changed, 57 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index c5f4a67772cb..09ee70c0df26 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -401,6 +401,7 @@ void assert_on_unhandled_exception(struct kvm_vm *vm, uint32_t vcpuid);
 int vm_get_stats_fd(struct kvm_vm *vm);
 int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid);
 void dump_vm_stats(struct kvm_vm *vm);
+uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name);
 
 uint32_t guest_get_vcpuid(void);
 
diff --git a/tools/testing/selftests/kvm/kvm_binary_stats_test.c b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
index afc4701ce8dd..97bde355f105 100644
--- a/tools/testing/selftests/kvm/kvm_binary_stats_test.c
+++ b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
@@ -177,6 +177,9 @@ static void vm_stats_test(struct kvm_vm *vm)
 
 	/* Dump VM stats */
 	dump_vm_stats(vm);
+
+	/* Read a single stat. */
+	printf("remote_tlb_flush: %lu\n", vm_get_single_stat(vm, "remote_tlb_flush"));
 }
 
 static void vcpu_stats_test(struct kvm_vm *vm, int vcpu_id)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 4d21c3b46780..1d3493d7fd55 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2699,3 +2699,56 @@ void dump_vm_stats(struct kvm_vm *vm)
 
 	close(stats_fd);
 }
+
+static int vm_get_stat_data(struct kvm_vm *vm, const char *stat_name,
+			    uint64_t **data)
+{
+	struct kvm_stats_desc *stats_desc;
+	struct kvm_stats_header *header;
+	struct kvm_stats_desc *desc;
+	size_t size_desc;
+	int stats_fd;
+	int ret = -EINVAL;
+	int i;
+
+	*data = NULL;
+
+	stats_fd = vm_get_stats_fd(vm);
+
+	header = read_vm_stats_header(stats_fd);
+
+	stats_desc = read_vm_stats_desc(stats_fd, header);
+
+	size_desc = stats_desc_size(header);
+
+	/* Read kvm stats data one by one */
+	for (i = 0; i < header->num_desc; ++i) {
+		desc = (void *)stats_desc + (i * size_desc);
+
+		if (strcmp(desc->name, stat_name))
+			continue;
+
+		ret = read_stat_data(stats_fd, header, desc, data);
+	}
+
+	free(stats_desc);
+	free(header);
+
+	close(stats_fd);
+
+	return ret;
+}
+
+uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name)
+{
+	uint64_t *data;
+	uint64_t value;
+	int ret;
+
+	ret = vm_get_stat_data(vm, stat_name, &data);
+	TEST_ASSERT(ret == 1, "Stat %s expected to have 1 element, but has %d",
+		    stat_name, ret);
+	value = *data;
+	free(data);
+	return value;
+}

From patchwork Thu Mar 10 16:45:22 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12776751
Date: Thu, 10 Mar 2022 08:45:22 -0800
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>
Message-Id: <20220310164532.1821490-4-bgardon@google.com>
Subject: [PATCH 03/13] selftests: KVM: Wrap memslot IDs in a struct for readability
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon
X-Mailing-List: kvm@vger.kernel.org

In many places in the KVM selftests, memslots are referred to by raw
integer IDs. This makes it very difficult to read which argument to a
function is which. Wrap the memslot ID in a struct and provide an easy
macro to create the structs. This should make the code much clearer and
points out where memslot 0 is tacitly used in many library functions.

No functional change intended.
Signed-off-by: Ben Gardon --- .../selftests/kvm/dirty_log_perf_test.c | 7 +- tools/testing/selftests/kvm/dirty_log_test.c | 43 +++++---- .../selftests/kvm/include/kvm_util_base.h | 42 +++++---- .../selftests/kvm/include/x86_64/vmx.h | 4 +- .../selftests/kvm/kvm_page_table_test.c | 9 +- .../selftests/kvm/lib/aarch64/processor.c | 2 +- tools/testing/selftests/kvm/lib/kvm_util.c | 88 ++++++++++--------- .../selftests/kvm/lib/kvm_util_internal.h | 2 +- .../selftests/kvm/lib/perf_test_util.c | 4 +- .../selftests/kvm/lib/riscv/processor.c | 2 +- .../selftests/kvm/lib/s390x/processor.c | 6 +- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 4 +- .../selftests/kvm/max_guest_memory_test.c | 6 +- .../kvm/memslot_modification_stress_test.c | 6 +- .../testing/selftests/kvm/memslot_perf_test.c | 11 ++- .../selftests/kvm/set_memory_region_test.c | 8 +- tools/testing/selftests/kvm/steal_time.c | 3 +- .../kvm/x86_64/emulator_error_test.c | 2 +- .../selftests/kvm/x86_64/mmu_role_test.c | 3 +- tools/testing/selftests/kvm/x86_64/smm_test.c | 2 +- .../selftests/kvm/x86_64/vmx_dirty_log_test.c | 10 +-- .../selftests/kvm/x86_64/xen_shinfo_test.c | 4 +- .../selftests/kvm/x86_64/xen_vmcall_test.c | 2 +- 23 files changed, 151 insertions(+), 119 deletions(-) diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c index 101759ac93a4..04817f65cf18 100644 --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c @@ -101,7 +101,7 @@ static void toggle_dirty_logging(struct kvm_vm *vm, int slots, bool enable) int slot = PERF_TEST_MEM_SLOT_INDEX + i; int flags = enable ? 
KVM_MEM_LOG_DIRTY_PAGES : 0; - vm_mem_region_set_flags(vm, slot, flags); + vm_mem_region_set_flags(vm, MEMSLOT(slot), flags); } } @@ -122,7 +122,7 @@ static void get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots for (i = 0; i < slots; i++) { int slot = PERF_TEST_MEM_SLOT_INDEX + i; - kvm_vm_get_dirty_log(vm, slot, bitmaps[i]); + kvm_vm_get_dirty_log(vm, MEMSLOT(slot), bitmaps[i]); } } @@ -134,7 +134,8 @@ static void clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], for (i = 0; i < slots; i++) { int slot = PERF_TEST_MEM_SLOT_INDEX + i; - kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], 0, pages_per_slot); + kvm_vm_clear_dirty_log(vm, MEMSLOT(slot), bitmaps[i], 0, + pages_per_slot); } } diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 3fcd89e195c7..1241c9a2729c 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -26,7 +26,7 @@ #define VCPU_ID 1 /* The memory slot index to track dirty pages */ -#define TEST_MEM_SLOT_INDEX 1 +#define TEST_MEMSLOT MEMSLOT(1) /* Default guest test virtual memory offset */ #define DEFAULT_GUEST_TEST_MEM 0xc0000000 @@ -229,17 +229,19 @@ static void clear_log_create_vm_done(struct kvm_vm *vm) vm_enable_cap(vm, &cap); } -static void dirty_log_collect_dirty_pages(struct kvm_vm *vm, int slot, +static void dirty_log_collect_dirty_pages(struct kvm_vm *vm, + struct kvm_memslot memslot, void *bitmap, uint32_t num_pages) { - kvm_vm_get_dirty_log(vm, slot, bitmap); + kvm_vm_get_dirty_log(vm, memslot, bitmap); } -static void clear_log_collect_dirty_pages(struct kvm_vm *vm, int slot, +static void clear_log_collect_dirty_pages(struct kvm_vm *vm, + struct kvm_memslot memslot, void *bitmap, uint32_t num_pages) { - kvm_vm_get_dirty_log(vm, slot, bitmap); - kvm_vm_clear_dirty_log(vm, slot, bitmap, 0, num_pages); + kvm_vm_get_dirty_log(vm, memslot, bitmap); + kvm_vm_clear_dirty_log(vm, memslot, bitmap, 0, 
num_pages); } /* Should only be called after a GUEST_SYNC */ @@ -293,7 +295,7 @@ static inline void dirty_gfn_set_collected(struct kvm_dirty_gfn *gfn) } static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns, - int slot, void *bitmap, + struct kvm_memslot memslot, void *bitmap, uint32_t num_pages, uint32_t *fetch_index) { struct kvm_dirty_gfn *cur; @@ -303,8 +305,9 @@ static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns, cur = &dirty_gfns[*fetch_index % test_dirty_ring_count]; if (!dirty_gfn_is_dirtied(cur)) break; - TEST_ASSERT(cur->slot == slot, "Slot number didn't match: " - "%u != %u", cur->slot, slot); + TEST_ASSERT(cur->slot == memslot.id, + "Slot number didn't match: %u != %u", + cur->slot, memslot.id); TEST_ASSERT(cur->offset < num_pages, "Offset overflow: " "0x%llx >= 0x%x", cur->offset, num_pages); //pr_info("fetch 0x%x page %llu\n", *fetch_index, cur->offset); @@ -331,7 +334,8 @@ static void dirty_ring_continue_vcpu(void) sem_post(&sem_vcpu_cont); } -static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot, +static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, + struct kvm_memslot memslot, void *bitmap, uint32_t num_pages) { /* We only have one vcpu */ @@ -352,7 +356,7 @@ static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot, /* Only have one vcpu */ count = dirty_ring_collect_one(vcpu_map_dirty_ring(vm, VCPU_ID), - slot, bitmap, num_pages, &fetch_index); + memslot, bitmap, num_pages, &fetch_index); cleared = kvm_vm_reset_dirty_ring(vm); @@ -408,8 +412,9 @@ struct log_mode { /* Hook when the vm creation is done (before vcpu creation) */ void (*create_vm_done)(struct kvm_vm *vm); /* Hook to collect the dirty pages into the bitmap provided */ - void (*collect_dirty_pages) (struct kvm_vm *vm, int slot, - void *bitmap, uint32_t num_pages); + void (*collect_dirty_pages)(struct kvm_vm *vm, + struct kvm_memslot memslot, + void *bitmap, uint32_t num_pages); /* Hook to call when after 
each vcpu run */ void (*after_vcpu_run)(struct kvm_vm *vm, int ret, int err); void (*before_vcpu_join) (void); @@ -473,14 +478,15 @@ static void log_mode_create_vm_done(struct kvm_vm *vm) mode->create_vm_done(vm); } -static void log_mode_collect_dirty_pages(struct kvm_vm *vm, int slot, +static void log_mode_collect_dirty_pages(struct kvm_vm *vm, + struct kvm_memslot memslot, void *bitmap, uint32_t num_pages) { struct log_mode *mode = &log_modes[host_log_mode]; TEST_ASSERT(mode->collect_dirty_pages != NULL, "collect_dirty_pages() is required for any log mode!"); - mode->collect_dirty_pages(vm, slot, bitmap, num_pages); + mode->collect_dirty_pages(vm, memslot, bitmap, num_pages); } static void log_mode_after_vcpu_run(struct kvm_vm *vm, int ret, int err) @@ -755,8 +761,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) /* Add an extra memory slot for testing dirty logging */ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, guest_test_phys_mem, - TEST_MEM_SLOT_INDEX, - guest_num_pages, + TEST_MEMSLOT, guest_num_pages, KVM_MEM_LOG_DIRTY_PAGES); /* Do mapping for the dirty track memory slot */ @@ -786,8 +791,8 @@ static void run_test(enum vm_guest_mode mode, void *arg) while (iteration < p->iterations) { /* Give the vcpu thread some time to dirty some pages */ usleep(p->interval * 1000); - log_mode_collect_dirty_pages(vm, TEST_MEM_SLOT_INDEX, - bmap, host_num_pages); + log_mode_collect_dirty_pages(vm, TEST_MEMSLOT, bmap, + host_num_pages); /* * See vcpu_sync_stop_requested definition for details on why diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 09ee70c0df26..69a6b5e509ab 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -27,6 +27,14 @@ */ struct kvm_vm; +/* Simple int wrapper to represent memslots for callers of kvm_util. 
*/ +struct kvm_memslot { + uint32_t id; +}; + +#define MEMSLOT(_id) ((struct kvm_memslot){ .id = _id }) + + typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */ @@ -114,9 +122,10 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm); void kvm_vm_free(struct kvm_vm *vmp); void kvm_vm_restart(struct kvm_vm *vmp, int perm); void kvm_vm_release(struct kvm_vm *vmp); -void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log); -void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log, - uint64_t first_page, uint32_t num_pages); +void kvm_vm_get_dirty_log(struct kvm_vm *vm, struct kvm_memslot memslot, + void *log); +void kvm_vm_clear_dirty_log(struct kvm_vm *vm, struct kvm_memslot memslot, + void *log, uint64_t first_page, uint32_t num_pages); uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm); int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva, @@ -148,14 +157,15 @@ void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid, void vm_create_irqchip(struct kvm_vm *vm); -void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags, - uint64_t gpa, uint64_t size, void *hva); -int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags, - uint64_t gpa, uint64_t size, void *hva); +void vm_set_user_memory_region(struct kvm_vm *vm, struct kvm_memslot memslot, + uint32_t flags, uint64_t gpa, uint64_t size, + void *hva); +int __vm_set_user_memory_region(struct kvm_vm *vm, struct kvm_memslot memslot, + uint32_t flags, uint64_t gpa, uint64_t size, + void *hva); void vm_userspace_mem_region_add(struct kvm_vm *vm, - enum vm_mem_backing_src_type src_type, - uint64_t guest_paddr, uint32_t slot, uint64_t npages, - uint32_t flags); + enum vm_mem_backing_src_type src_type, uint64_t guest_paddr, + struct kvm_memslot memslot, uint64_t npages, uint32_t flags); void vcpu_ioctl(struct 
kvm_vm *vm, uint32_t vcpuid, unsigned long ioctl, void *arg); @@ -165,9 +175,11 @@ void vm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg); int _vm_ioctl(struct kvm_vm *vm, unsigned long cmd, void *arg); void kvm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg); int _kvm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg); -void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags); -void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa); -void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot); +void vm_mem_region_set_flags(struct kvm_vm *vm, struct kvm_memslot memslot, + uint32_t flags); +void vm_mem_region_move(struct kvm_vm *vm, struct kvm_memslot memslot, + uint64_t new_gpa); +void vm_mem_region_delete(struct kvm_vm *vm, struct kvm_memslot memslot); void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid); vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages); @@ -307,9 +319,9 @@ void virt_pgd_alloc(struct kvm_vm *vm); void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr); vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, - uint32_t memslot); + struct kvm_memslot memslot); vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, - vm_paddr_t paddr_min, uint32_t memslot); + vm_paddr_t paddr_min, struct kvm_memslot memslot); vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm); /* diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h index 583ceb0d1457..cc1dd1f82a1d 100644 --- a/tools/testing/selftests/kvm/include/x86_64/vmx.h +++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h @@ -612,9 +612,9 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr, uint64_t size); void nested_map_memslot(struct vmx_pages *vmx, 
struct kvm_vm *vm, - uint32_t memslot); + struct kvm_memslot memslot); void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm, - uint32_t eptp_memslot); + struct kvm_memslot eptp_memslot); void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm); #endif /* SELFTEST_KVM_VMX_H */ diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c index ba1fdc3dcf4a..74687be63e8a 100644 --- a/tools/testing/selftests/kvm/kvm_page_table_test.c +++ b/tools/testing/selftests/kvm/kvm_page_table_test.c @@ -22,7 +22,7 @@ #include "processor.h" #include "guest_modes.h" -#define TEST_MEM_SLOT_INDEX 1 +#define TEST_MEMSLOT MEMSLOT(1) /* Default size(1GB) of the memory for testing */ #define DEFAULT_TEST_MEM_SIZE (1 << 30) @@ -300,7 +300,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg) /* Add an extra memory slot with specified backing src type */ vm_userspace_mem_region_add(vm, src_type, guest_test_phys_mem, - TEST_MEM_SLOT_INDEX, guest_num_pages, 0); + TEST_MEMSLOT, guest_num_pages, 0); /* Do mapping(GVA->GPA) for the testing memory slot */ virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages); @@ -398,8 +398,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) ts_diff.tv_sec, ts_diff.tv_nsec); /* Test the stage of KVM updating mappings */ - vm_mem_region_set_flags(vm, TEST_MEM_SLOT_INDEX, - KVM_MEM_LOG_DIRTY_PAGES); + vm_mem_region_set_flags(vm, TEST_MEMSLOT, KVM_MEM_LOG_DIRTY_PAGES); *current_stage = KVM_UPDATE_MAPPINGS; @@ -411,7 +410,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) ts_diff.tv_sec, ts_diff.tv_nsec); /* Test the stage of KVM adjusting mappings */ - vm_mem_region_set_flags(vm, TEST_MEM_SLOT_INDEX, 0); + vm_mem_region_set_flags(vm, TEST_MEMSLOT, 0); *current_stage = KVM_ADJUST_MAPPINGS; diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c index 
9343d82519b4..a9e505e351e0 100644 --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c @@ -80,7 +80,7 @@ void virt_pgd_alloc(struct kvm_vm *vm) if (!vm->pgd_created) { vm_paddr_t paddr = vm_phy_pages_alloc(vm, page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + KVM_GUEST_PAGE_TABLE_MIN_PADDR, MEMSLOT(0)); vm->pgd = paddr; vm->pgd_created = true; } diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 1d3493d7fd55..97d1badaba8b 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -357,7 +357,7 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm) vm->vpages_mapped = sparsebit_alloc(); if (phy_pages != 0) vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, - 0, 0, phy_pages, 0); + 0, MEMSLOT(0), phy_pages, 0); return vm; } @@ -488,9 +488,10 @@ void kvm_vm_restart(struct kvm_vm *vmp, int perm) } } -void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log) +void kvm_vm_get_dirty_log(struct kvm_vm *vm, struct kvm_memslot memslot, + void *log) { - struct kvm_dirty_log args = { .dirty_bitmap = log, .slot = slot }; + struct kvm_dirty_log args = { .dirty_bitmap = log, .slot = memslot.id }; int ret; ret = ioctl(vm->fd, KVM_GET_DIRTY_LOG, &args); @@ -498,11 +499,11 @@ void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log) __func__, strerror(-ret)); } -void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log, - uint64_t first_page, uint32_t num_pages) +void kvm_vm_clear_dirty_log(struct kvm_vm *vm, struct kvm_memslot memslot, + void *log, uint64_t first_page, uint32_t num_pages) { struct kvm_clear_dirty_log args = { - .dirty_bitmap = log, .slot = slot, + .dirty_bitmap = log, .slot = memslot.id, .first_page = first_page, .num_pages = num_pages }; @@ -861,11 +862,12 @@ static void 
vm_userspace_mem_region_hva_insert(struct rb_root *hva_tree, } -int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags, - uint64_t gpa, uint64_t size, void *hva) +int __vm_set_user_memory_region(struct kvm_vm *vm, struct kvm_memslot memslot, + uint32_t flags, uint64_t gpa, uint64_t size, + void *hva) { struct kvm_userspace_memory_region region = { - .slot = slot, + .slot = memslot.id, .flags = flags, .guest_phys_addr = gpa, .memory_size = size, @@ -875,10 +877,11 @@ int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags return ioctl(vm->fd, KVM_SET_USER_MEMORY_REGION, &region); } -void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags, - uint64_t gpa, uint64_t size, void *hva) +void vm_set_user_memory_region(struct kvm_vm *vm, struct kvm_memslot memslot, + uint32_t flags, uint64_t gpa, uint64_t size, + void *hva) { - int ret = __vm_set_user_memory_region(vm, slot, flags, gpa, size, hva); + int ret = __vm_set_user_memory_region(vm, memslot, flags, gpa, size, hva); TEST_ASSERT(!ret, "KVM_SET_USER_MEMORY_REGION failed, errno = %d (%s)", errno, strerror(errno)); @@ -892,7 +895,7 @@ void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags, * src_type - Storage source for this region. * NULL to use anonymous memory. * guest_paddr - Starting guest physical address - * slot - KVM region slot + * memslot - KVM region slot * npages - Number of physical pages * flags - KVM memory region flags (e.g. KVM_MEM_LOG_DIRTY_PAGES) * @@ -907,9 +910,8 @@ void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags, * region is created with the flags given by flags.
*/ void vm_userspace_mem_region_add(struct kvm_vm *vm, - enum vm_mem_backing_src_type src_type, - uint64_t guest_paddr, uint32_t slot, uint64_t npages, - uint32_t flags) + enum vm_mem_backing_src_type src_type, uint64_t guest_paddr, + struct kvm_memslot memslot, uint64_t npages, uint32_t flags) { int ret; struct userspace_mem_region *region; @@ -949,15 +951,15 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm, /* Confirm no region with the requested slot already exists. */ hash_for_each_possible(vm->regions.slot_hash, region, slot_node, - slot) { - if (region->region.slot != slot) + memslot.id) { + if (region->region.slot != memslot.id) continue; TEST_FAIL("A mem region with the requested slot " "already exists.\n" " requested slot: %u paddr: 0x%lx npages: 0x%lx\n" " existing slot: %u paddr: 0x%lx size: 0x%lx", - slot, guest_paddr, npages, + memslot.id, guest_paddr, npages, region->region.slot, (uint64_t) region->region.guest_phys_addr, (uint64_t) region->region.memory_size); @@ -1024,7 +1026,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm, region->unused_phy_pages = sparsebit_alloc(); sparsebit_set_num(region->unused_phy_pages, guest_paddr >> vm->page_shift, npages); - region->region.slot = slot; + region->region.slot = memslot.id; region->region.flags = flags; region->region.guest_phys_addr = guest_paddr; region->region.memory_size = npages * vm->page_size; @@ -1034,13 +1036,13 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm, " rc: %i errno: %i\n" " slot: %u flags: 0x%x\n" " guest_phys_addr: 0x%lx size: 0x%lx", - ret, errno, slot, flags, + ret, errno, memslot.id, flags, guest_paddr, (uint64_t) region->region.memory_size); /* Add to quick lookup data structures */ vm_userspace_mem_region_gpa_insert(&vm->regions.gpa_tree, region); vm_userspace_mem_region_hva_insert(&vm->regions.hva_tree, region); - hash_add(vm->regions.slot_hash, &region->slot_node, slot); + hash_add(vm->regions.slot_hash, &region->slot_node, memslot.id); /* If shared memory, create an
alias. */ if (region->fd >= 0) { @@ -1072,17 +1074,17 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm, * memory slot ID). */ struct userspace_mem_region * -memslot2region(struct kvm_vm *vm, uint32_t memslot) +memslot2region(struct kvm_vm *vm, struct kvm_memslot memslot) { struct userspace_mem_region *region; hash_for_each_possible(vm->regions.slot_hash, region, slot_node, - memslot) - if (region->region.slot == memslot) + memslot.id) + if (region->region.slot == memslot.id) return region; fprintf(stderr, "No mem region with the requested slot found,\n" - " requested slot: %u\n", memslot); + " requested slot: %u\n", memslot.id); fputs("---- vm dump ----\n", stderr); vm_dump(stderr, vm, 2); TEST_FAIL("Mem region not found"); @@ -1103,12 +1105,13 @@ memslot2region(struct kvm_vm *vm, uint32_t memslot) * Sets the flags of the memory region specified by the value of slot, * to the values given by flags. */ -void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags) +void vm_mem_region_set_flags(struct kvm_vm *vm, struct kvm_memslot memslot, + uint32_t flags) { int ret; struct userspace_mem_region *region; - region = memslot2region(vm, slot); + region = memslot2region(vm, memslot); region->region.flags = flags; @@ -1116,7 +1119,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags) TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed,\n" " rc: %i errno: %i slot: %u flags: 0x%x", - ret, errno, slot, flags); + ret, errno, memslot.id, flags); } /* @@ -1124,7 +1127,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags) * * Input Args: * vm - Virtual Machine - * slot - Slot of the memory region to move + * memslot - Memslot of the memory region to move * new_gpa - Starting guest physical address * * Output Args: None @@ -1133,12 +1136,13 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags) * * Change the gpa of a memory region. 
*/ -void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa) +void vm_mem_region_move(struct kvm_vm *vm, struct kvm_memslot memslot, + uint64_t new_gpa) { struct userspace_mem_region *region; int ret; - region = memslot2region(vm, slot); + region = memslot2region(vm, memslot); region->region.guest_phys_addr = new_gpa; @@ -1146,7 +1150,7 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa) TEST_ASSERT(!ret, "KVM_SET_USER_MEMORY_REGION failed\n" "ret: %i errno: %i slot: %u new_gpa: 0x%lx", - ret, errno, slot, new_gpa); + ret, errno, memslot.id, new_gpa); } /* @@ -1154,7 +1158,7 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa) * * Input Args: * vm - Virtual Machine - * slot - Slot of the memory region to delete + * memslot - Memslot of the memory region to delete * * Output Args: None * @@ -1162,9 +1166,9 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa) * * Delete a memory region. */ -void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot) +void vm_mem_region_delete(struct kvm_vm *vm, struct kvm_memslot memslot) { - __vm_mem_region_delete(vm, memslot2region(vm, slot), true); + __vm_mem_region_delete(vm, memslot2region(vm, memslot), true); } /* @@ -1356,7 +1360,8 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min) virt_pgd_alloc(vm); vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages, - KVM_UTIL_MIN_PFN * vm->page_size, 0); + KVM_UTIL_MIN_PFN * vm->page_size, + MEMSLOT(0)); /* * Find an unused range of virtual page addresses of at least @@ -2377,7 +2382,7 @@ const char *exit_reason_str(unsigned int exit_reason) * not enough pages are available at or above paddr_min. 
*/ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, - vm_paddr_t paddr_min, uint32_t memslot) + vm_paddr_t paddr_min, struct kvm_memslot memslot) { struct userspace_mem_region *region; sparsebit_idx_t pg, base; @@ -2404,7 +2409,7 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, if (pg == 0) { fprintf(stderr, "No guest physical page available, " "paddr_min: 0x%lx page_size: 0x%x memslot: %u\n", - paddr_min, vm->page_size, memslot); + paddr_min, vm->page_size, memslot.id); fputs("---- vm dump ----\n", stderr); vm_dump(stderr, vm, 2); abort(); @@ -2417,7 +2422,7 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, } vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, - uint32_t memslot) + struct kvm_memslot memslot) { return vm_phy_pages_alloc(vm, 1, paddr_min, memslot); } @@ -2427,7 +2432,8 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm) { - return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, + MEMSLOT(0)); } /* diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h index a03febc24ba6..386ad653391c 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h +++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h @@ -123,6 +123,6 @@ void regs_dump(FILE *stream, struct kvm_regs *regs, uint8_t indent); void sregs_dump(FILE *stream, struct kvm_sregs *sregs, uint8_t indent); struct userspace_mem_region * -memslot2region(struct kvm_vm *vm, uint32_t memslot); +memslot2region(struct kvm_vm *vm, struct kvm_memslot memslot); #endif /* SELFTEST_KVM_UTIL_INTERNAL_H */ diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c index 722df3a28791..e19bb2b66bc5 100644 --- a/tools/testing/selftests/kvm/lib/perf_test_util.c +++ 
b/tools/testing/selftests/kvm/lib/perf_test_util.c @@ -169,8 +169,8 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, vm_paddr_t region_start = pta->gpa + region_pages * pta->guest_page_size * i; vm_userspace_mem_region_add(vm, backing_src, region_start, - PERF_TEST_MEM_SLOT_INDEX + i, - region_pages, 0); + MEMSLOT(PERF_TEST_MEM_SLOT_INDEX + i), + region_pages, 0); } /* Do mapping for the demand paging memory slot */ diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c index d377f2603d98..7a0ff26b9f8d 100644 --- a/tools/testing/selftests/kvm/lib/riscv/processor.c +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c @@ -59,7 +59,7 @@ void virt_pgd_alloc(struct kvm_vm *vm) if (!vm->pgd_created) { vm_paddr_t paddr = vm_phy_pages_alloc(vm, page_align(vm, ptrs_per_pte(vm) * 8) / vm->page_size, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + KVM_GUEST_PAGE_TABLE_MIN_PADDR, MEMSLOT(0)); vm->pgd = paddr; vm->pgd_created = true; } diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c index f87c7137598e..1c873a26e6de 100644 --- a/tools/testing/selftests/kvm/lib/s390x/processor.c +++ b/tools/testing/selftests/kvm/lib/s390x/processor.c @@ -22,7 +22,8 @@ void virt_pgd_alloc(struct kvm_vm *vm) return; paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + KVM_GUEST_PAGE_TABLE_MIN_PADDR, + MEMSLOT(0)); memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size); vm->pgd = paddr; @@ -39,7 +40,8 @@ static uint64_t virt_alloc_region(struct kvm_vm *vm, int ri) uint64_t taddr; taddr = vm_phy_pages_alloc(vm, ri < 4 ? 
PAGES_PER_REGION : 1, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + KVM_GUEST_PAGE_TABLE_MIN_PADDR, + MEMSLOT(0)); memset(addr_gpa2hva(vm, taddr), 0xff, PAGES_PER_REGION * vm->page_size); return (taddr & REGION_ENTRY_ORIGIN) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index d089d8b850b5..7ea9455b3e71 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -505,7 +505,7 @@ void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, * physical pages in VM. */ void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm, - uint32_t memslot) + struct kvm_memslot memslot) { sparsebit_idx_t i, last; struct userspace_mem_region *region = @@ -526,7 +526,7 @@ void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm, } void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm, - uint32_t eptp_memslot) + struct kvm_memslot eptp_memslot) { vmx->eptp = (void *)vm_vaddr_alloc_page(vm); vmx->eptp_hva = addr_gva2hva(vm, (uintptr_t)vmx->eptp); diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c index 3875c4b23a04..a946a90604ea 100644 --- a/tools/testing/selftests/kvm/max_guest_memory_test.c +++ b/tools/testing/selftests/kvm/max_guest_memory_test.c @@ -239,7 +239,8 @@ int main(int argc, char *argv[]) if ((gpa - start_gpa) >= max_mem) break; - vm_set_user_memory_region(vm, slot, 0, gpa, slot_size, mem); + vm_set_user_memory_region(vm, MEMSLOT(slot), + 0, gpa, slot_size, mem); #ifdef __x86_64__ /* Identity map memory in the guest using 1gb pages. */ @@ -277,7 +278,8 @@ int main(int argc, char *argv[]) * references to the removed regions. 
*/ for (slot = (slot - 1) & ~1ull; slot >= first_slot; slot -= 2) - vm_set_user_memory_region(vm, slot, 0, 0, 0, NULL); + vm_set_user_memory_region(vm, MEMSLOT(slot), + 0, 0, 0, NULL); munmap(mem, slot_size / 2); diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c index 1410d0a9141a..465f24ac7b88 100644 --- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c +++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c @@ -26,7 +26,7 @@ #include "test_util.h" #include "guest_modes.h" -#define DUMMY_MEMSLOT_INDEX 7 +#define DUMMY_MEMSLOT MEMSLOT(7) #define DEFAULT_MEMSLOT_MODIFICATION_ITERATIONS 10 @@ -81,9 +81,9 @@ static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay, for (i = 0; i < nr_modifications; i++) { usleep(delay); vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, gpa, - DUMMY_MEMSLOT_INDEX, pages, 0); + DUMMY_MEMSLOT, pages, 0); - vm_mem_region_delete(vm, DUMMY_MEMSLOT_INDEX); + vm_mem_region_delete(vm, DUMMY_MEMSLOT); } } diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c index 1727f75e0c2c..a18e3a7a19c8 100644 --- a/tools/testing/selftests/kvm/memslot_perf_test.c +++ b/tools/testing/selftests/kvm/memslot_perf_test.c @@ -293,7 +293,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots, npages += rempages; vm_userspace_mem_region_add(data->vm, VM_MEM_SRC_ANONYMOUS, - guest_addr, slot, npages, + guest_addr, MEMSLOT(slot), npages, 0); guest_addr += npages * 4096; } @@ -308,7 +308,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots, npages += rempages; gpa = vm_phy_pages_alloc(data->vm, npages, guest_addr, - slot + 1); + MEMSLOT(slot + 1)); TEST_ASSERT(gpa == guest_addr, "vm_phy_pages_alloc() failed\n"); @@ -586,9 +586,12 @@ static void test_memslot_move_loop(struct vm_data *data, struct sync_area *sync) uint64_t 
movesrcgpa; movesrcgpa = vm_slot2gpa(data, data->nslots - 1); - vm_mem_region_move(data->vm, data->nslots - 1 + 1, + vm_mem_region_move(data->vm, + MEMSLOT(data->nslots - 1 + 1), MEM_TEST_MOVE_GPA_DEST); - vm_mem_region_move(data->vm, data->nslots - 1 + 1, movesrcgpa); + vm_mem_region_move(data->vm, + MEMSLOT(data->nslots - 1 + 1), + movesrcgpa); } static void test_memslot_do_unmap(struct vm_data *data, diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c index 73bc297dabe6..aca694607165 100644 --- a/tools/testing/selftests/kvm/set_memory_region_test.c +++ b/tools/testing/selftests/kvm/set_memory_region_test.c @@ -31,7 +31,7 @@ * Somewhat arbitrary location and slot, intended to not overlap anything. */ #define MEM_REGION_GPA 0xc0000000 -#define MEM_REGION_SLOT 10 +#define MEM_REGION_SLOT MEMSLOT(10) static const uint64_t MMIO_VAL = 0xbeefull; @@ -282,7 +282,7 @@ static void test_delete_memory_region(void) * Delete the primary memslot. This should cause an emulation error or * shutdown due to the page tables getting nuked. 
*/ - vm_mem_region_delete(vm, 0); + vm_mem_region_delete(vm, MEMSLOT(0)); pthread_join(vcpu_thread, NULL); @@ -367,7 +367,7 @@ static void test_add_max_memory_regions(void) mem_aligned = (void *)(((size_t) mem + alignment - 1) & ~(alignment - 1)); for (slot = 0; slot < max_mem_slots; slot++) - vm_set_user_memory_region(vm, slot, 0, + vm_set_user_memory_region(vm, MEMSLOT(slot), 0, ((uint64_t)slot * MEM_REGION_SIZE), MEM_REGION_SIZE, mem_aligned + (uint64_t)slot * MEM_REGION_SIZE); @@ -377,7 +377,7 @@ static void test_add_max_memory_regions(void) MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); TEST_ASSERT(mem_extra != MAP_FAILED, "Failed to mmap() host"); - ret = __vm_set_user_memory_region(vm, max_mem_slots, 0, + ret = __vm_set_user_memory_region(vm, MEMSLOT(max_mem_slots), 0, (uint64_t)max_mem_slots * MEM_REGION_SIZE, MEM_REGION_SIZE, mem_extra); TEST_ASSERT(ret == -1 && errno == EINVAL, diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c index 62f2eb9ee3d5..a3d5c0407506 100644 --- a/tools/testing/selftests/kvm/steal_time.c +++ b/tools/testing/selftests/kvm/steal_time.c @@ -276,7 +276,8 @@ int main(int ac, char **av) /* Create a one VCPU guest and an identity mapped memslot for the steal time structure */ vm = vm_create_default(0, 0, guest_code); gpages = vm_calc_num_guest_pages(VM_MODE_DEFAULT, STEAL_TIME_SIZE * NR_VCPUS); - vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, ST_GPA_BASE, 1, gpages, 0); + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, ST_GPA_BASE, + MEMSLOT(1), gpages, 0); virt_map(vm, ST_GPA_BASE, ST_GPA_BASE, gpages); ucall_init(vm, NULL); diff --git a/tools/testing/selftests/kvm/x86_64/emulator_error_test.c b/tools/testing/selftests/kvm/x86_64/emulator_error_test.c index f070ff0224fa..fe2d78313878 100644 --- a/tools/testing/selftests/kvm/x86_64/emulator_error_test.c +++ b/tools/testing/selftests/kvm/x86_64/emulator_error_test.c @@ -17,7 +17,7 @@ #define MEM_REGION_GVA 0x0000123456789000 #define 
MEM_REGION_GPA 0x0000000700000000 -#define MEM_REGION_SLOT 10 +#define MEM_REGION_SLOT MEMSLOT(10) #define MEM_REGION_SIZE PAGE_SIZE static void guest_code(void) diff --git a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c index da2325fcad87..9ff6bc4c278d 100644 --- a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c +++ b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c @@ -66,7 +66,8 @@ static void mmu_role_test(u32 *cpuid_reg, u32 evil_cpuid_val) * KVM x86 zaps all shadow pages on memslot deletion. */ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, - MMIO_GPA << 1, 10, 1, 0); + MMIO_GPA << 1, + MEMSLOT(10), 1, 0); /* Set up a #PF handler to eat the RSVD #PF and signal all done! */ vm_init_descriptor_tables(vm); diff --git a/tools/testing/selftests/kvm/x86_64/smm_test.c b/tools/testing/selftests/kvm/x86_64/smm_test.c index a626d40fdb48..3de2958106f7 100644 --- a/tools/testing/selftests/kvm/x86_64/smm_test.c +++ b/tools/testing/selftests/kvm/x86_64/smm_test.c @@ -24,7 +24,7 @@ #define PAGE_SIZE 4096 #define SMRAM_SIZE 65536 -#define SMRAM_MEMSLOT ((1 << 16) | 1) +#define SMRAM_MEMSLOT MEMSLOT((1 << 16) | 1) #define SMRAM_PAGES (SMRAM_SIZE / PAGE_SIZE) #define SMRAM_GPA 0x1000000 #define SMRAM_STAGE 0xfe diff --git a/tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c b/tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c index 68f26a8b4f42..9adba67c1e1c 100644 --- a/tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c +++ b/tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c @@ -20,7 +20,7 @@ #define VCPU_ID 1 /* The memory slot index to track dirty pages */ -#define TEST_MEM_SLOT_INDEX 1 +#define TEST_MEMSLOT MEMSLOT(1) #define TEST_MEM_PAGES 3 /* L1 guest test virtual memory offset */ @@ -89,7 +89,7 @@ int main(int argc, char *argv[]) /* Add an extra memory slot for testing dirty logging */ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, GUEST_TEST_MEM, - TEST_MEM_SLOT_INDEX, + 
TEST_MEMSLOT, TEST_MEM_PAGES, KVM_MEM_LOG_DIRTY_PAGES); @@ -106,8 +106,8 @@ int main(int argc, char *argv[]) * Note that prepare_eptp should be called only L1's GPA map is done, * meaning after the last call to virt_map. */ - prepare_eptp(vmx, vm, 0); - nested_map_memslot(vmx, vm, 0); + prepare_eptp(vmx, vm, MEMSLOT(0)); + nested_map_memslot(vmx, vm, MEMSLOT(0)); nested_map(vmx, vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, 4096); nested_map(vmx, vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, 4096); @@ -132,7 +132,7 @@ int main(int argc, char *argv[]) * The nested guest wrote at offset 0x1000 in the memslot, but the * dirty bitmap must be filled in according to L1 GPA, not L2. */ - kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap); + kvm_vm_get_dirty_log(vm, TEST_MEMSLOT, bmap); if (uc.args[1]) { TEST_ASSERT(test_bit(0, bmap), "Page 0 incorrectly reported clean\n"); TEST_ASSERT(host_test_mem[0] == 1, "Page 0 not written by guest\n"); diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c index d9d9d1deec45..37f173a0f189 100644 --- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c +++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c @@ -22,11 +22,11 @@ #define SHINFO_REGION_GVA 0xc0000000ULL #define SHINFO_REGION_GPA 0xc0000000ULL -#define SHINFO_REGION_SLOT 10 +#define SHINFO_REGION_SLOT MEMSLOT(10) #define PAGE_SIZE 4096 #define DUMMY_REGION_GPA (SHINFO_REGION_GPA + (2 * PAGE_SIZE)) -#define DUMMY_REGION_SLOT 11 +#define DUMMY_REGION_SLOT MEMSLOT(11) #define SHINFO_ADDR (SHINFO_REGION_GPA) #define PVTIME_ADDR (SHINFO_REGION_GPA + PAGE_SIZE) diff --git a/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c b/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c index adc94452b57c..edf5f5600766 100644 --- a/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c +++ b/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c @@ -14,7 +14,7 @@ #define VCPU_ID 5 #define HCALL_REGION_GPA 0xc0000000ULL -#define 
HCALL_REGION_SLOT 10 +#define HCALL_REGION_SLOT MEMSLOT(10) #define PAGE_SIZE 4096 static struct kvm_vm *vm; From patchwork Thu Mar 10 16:45:23 2022 X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12776752
Date: Thu, 10 Mar 2022 08:45:23 -0800 In-Reply-To: <20220310164532.1821490-1-bgardon@google.com> Message-Id: <20220310164532.1821490-5-bgardon@google.com> References: <20220310164532.1821490-1-bgardon@google.com> Subject: [PATCH 04/13] selftests: KVM: Add memslot parameter to VM vaddr allocation From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Currently, the vm_vaddr_alloc functions implicitly allocate in memslot 0. Add an argument to allow allocations in any memslot. This will be used in future commits to allow loading code in a test memslot with different backing memory types. No functional change intended.
Signed-off-by: Ben Gardon --- .../selftests/kvm/include/kvm_util_base.h | 8 ++++--- .../selftests/kvm/lib/aarch64/processor.c | 5 +++-- tools/testing/selftests/kvm/lib/elf.c | 3 ++- tools/testing/selftests/kvm/lib/kvm_util.c | 17 +++++++------- .../selftests/kvm/lib/riscv/processor.c | 3 ++- .../selftests/kvm/lib/s390x/processor.c | 3 ++- .../selftests/kvm/lib/x86_64/processor.c | 11 +++++----- tools/testing/selftests/kvm/lib/x86_64/svm.c | 8 +++---- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 22 +++++++++---------- tools/testing/selftests/kvm/x86_64/amx_test.c | 6 ++--- .../testing/selftests/kvm/x86_64/cpuid_test.c | 2 +- .../selftests/kvm/x86_64/hyperv_clock.c | 2 +- .../selftests/kvm/x86_64/hyperv_features.c | 6 ++--- .../selftests/kvm/x86_64/kvm_clock_test.c | 2 +- .../selftests/kvm/x86_64/xapic_ipi_test.c | 2 +- 15 files changed, 54 insertions(+), 46 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 69a6b5e509ab..f70dfa3e1202 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -181,9 +181,11 @@ void vm_mem_region_move(struct kvm_vm *vm, struct kvm_memslot memslot, uint64_t new_gpa); void vm_mem_region_delete(struct kvm_vm *vm, struct kvm_memslot memslot); void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid); -vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); -vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages); -vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm); +vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, + struct kvm_memslot memslot); +vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages, + struct kvm_memslot memslot); +vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm, struct kvm_memslot memslot); void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, unsigned int npages); diff --git 
a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c index a9e505e351e0..163746259d93 100644 --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c @@ -322,7 +322,8 @@ void aarch64_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, DEFAULT_STACK_PGS * vm->page_size : vm->page_size; uint64_t stack_vaddr = vm_vaddr_alloc(vm, stack_size, - DEFAULT_ARM64_GUEST_STACK_VADDR_MIN); + DEFAULT_ARM64_GUEST_STACK_VADDR_MIN, + MEMSLOT(0)); vm_vcpu_add(vm, vcpuid); aarch64_vcpu_setup(vm, vcpuid, init); @@ -426,7 +427,7 @@ void route_exception(struct ex_regs *regs, int vector) void vm_init_descriptor_tables(struct kvm_vm *vm) { vm->handlers = vm_vaddr_alloc(vm, sizeof(struct handlers), - vm->page_size); + vm->page_size, MEMSLOT(0)); *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers; } diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c index 13e8e3dcf984..88d03cb80423 100644 --- a/tools/testing/selftests/kvm/lib/elf.c +++ b/tools/testing/selftests/kvm/lib/elf.c @@ -162,7 +162,8 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename) seg_vend |= vm->page_size - 1; size_t seg_size = seg_vend - seg_vstart + 1; - vm_vaddr_t vaddr = vm_vaddr_alloc(vm, seg_size, seg_vstart); + vm_vaddr_t vaddr = vm_vaddr_alloc(vm, seg_size, seg_vstart, + MEMSLOT(0)); TEST_ASSERT(vaddr == seg_vstart, "Unable to allocate " "virtual memory for segment at requested min addr,\n" " segment idx: %u\n" diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 97d1badaba8b..04abfc7e6b5c 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1340,8 +1340,7 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, * vm - Virtual Machine * sz - Size in bytes * vaddr_min - Minimum starting virtual address - 
* data_memslot - Memory region slot for data pages - * pgd_memslot - Memory region slot for new virtual translation tables + * memslot - Memory region slot for data pages * * Output Args: None * @@ -1354,14 +1353,15 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, * a unique set of pages, with the minimum real allocation being at least * a page. */ -vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min) +vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, + struct kvm_memslot memslot) { uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0); virt_pgd_alloc(vm); vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages, KVM_UTIL_MIN_PFN * vm->page_size, - MEMSLOT(0)); + memslot); /* * Find an unused range of virtual page addresses of at least @@ -1396,9 +1396,10 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min) * Allocates at least N system pages worth of bytes within the virtual address * space of the vm. */ -vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages) +vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages, + struct kvm_memslot memslot) { - return vm_vaddr_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR); + return vm_vaddr_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR, memslot); } /* @@ -1415,9 +1416,9 @@ vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages) * Allocates at least one system page worth of bytes within the virtual address * space of the vm. 
*/ -vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm) +vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm, struct kvm_memslot memslot) { - return vm_vaddr_alloc_pages(vm, 1); + return vm_vaddr_alloc_pages(vm, 1, memslot); } /* diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c index 7a0ff26b9f8d..9b554d6939a5 100644 --- a/tools/testing/selftests/kvm/lib/riscv/processor.c +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c @@ -281,7 +281,8 @@ void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code) DEFAULT_STACK_PGS * vm->page_size : vm->page_size; unsigned long stack_vaddr = vm_vaddr_alloc(vm, stack_size, - DEFAULT_RISCV_GUEST_STACK_VADDR_MIN); + DEFAULT_RISCV_GUEST_STACK_VADDR_MIN, + MEMSLOT(0)); unsigned long current_gp = 0; struct kvm_mp_state mps; diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c index 1c873a26e6de..edcba350dbef 100644 --- a/tools/testing/selftests/kvm/lib/s390x/processor.c +++ b/tools/testing/selftests/kvm/lib/s390x/processor.c @@ -169,7 +169,8 @@ void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code) vm->page_size); stack_vaddr = vm_vaddr_alloc(vm, stack_size, - DEFAULT_GUEST_STACK_VADDR_MIN); + DEFAULT_GUEST_STACK_VADDR_MIN, + MEMSLOT(0)); vm_vcpu_add(vm, vcpuid); diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 9f000dfb5594..afcc13655790 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -597,7 +597,7 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva) static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt) { if (!vm->gdt) - vm->gdt = vm_vaddr_alloc_page(vm); + vm->gdt = vm_vaddr_alloc_page(vm, MEMSLOT(0)); dt->base = vm->gdt; dt->limit = getpagesize(); @@ -607,7 +607,7 @@ static void 
kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp, int selector) { if (!vm->tss) - vm->tss = vm_vaddr_alloc_page(vm); + vm->tss = vm_vaddr_alloc_page(vm, MEMSLOT(0)); memset(segp, 0, sizeof(*segp)); segp->base = vm->tss; @@ -710,7 +710,8 @@ void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code) struct kvm_regs regs; vm_vaddr_t stack_vaddr; stack_vaddr = vm_vaddr_alloc(vm, DEFAULT_STACK_PGS * getpagesize(), - DEFAULT_GUEST_STACK_VADDR_MIN); + DEFAULT_GUEST_STACK_VADDR_MIN, + MEMSLOT(0)); /* Create VCPU */ vm_vcpu_add(vm, vcpuid); @@ -1377,8 +1378,8 @@ void vm_init_descriptor_tables(struct kvm_vm *vm) extern void *idt_handlers; int i; - vm->idt = vm_vaddr_alloc_page(vm); - vm->handlers = vm_vaddr_alloc_page(vm); + vm->idt = vm_vaddr_alloc_page(vm, MEMSLOT(0)); + vm->handlers = vm_vaddr_alloc_page(vm, MEMSLOT(0)); /* Handlers have the same address in both address spaces.*/ for (i = 0; i < NUM_INTERRUPTS; i++) set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0, diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c index 736ee4a23df6..6d935cc1225a 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/svm.c +++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c @@ -32,18 +32,18 @@ u64 rflags; struct svm_test_data * vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva) { - vm_vaddr_t svm_gva = vm_vaddr_alloc_page(vm); + vm_vaddr_t svm_gva = vm_vaddr_alloc_page(vm, MEMSLOT(0)); struct svm_test_data *svm = addr_gva2hva(vm, svm_gva); - svm->vmcb = (void *)vm_vaddr_alloc_page(vm); + svm->vmcb = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); svm->vmcb_hva = addr_gva2hva(vm, (uintptr_t)svm->vmcb); svm->vmcb_gpa = addr_gva2gpa(vm, (uintptr_t)svm->vmcb); - svm->save_area = (void *)vm_vaddr_alloc_page(vm); + svm->save_area = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); svm->save_area_hva = addr_gva2hva(vm, (uintptr_t)svm->save_area); svm->save_area_gpa = addr_gva2gpa(vm, 
(uintptr_t)svm->save_area); - svm->msr = (void *)vm_vaddr_alloc_page(vm); + svm->msr = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); svm->msr_hva = addr_gva2hva(vm, (uintptr_t)svm->msr); svm->msr_gpa = addr_gva2gpa(vm, (uintptr_t)svm->msr); memset(svm->msr_hva, 0, getpagesize()); diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index 7ea9455b3e71..3678969e992a 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -77,48 +77,48 @@ int vcpu_enable_evmcs(struct kvm_vm *vm, int vcpu_id) struct vmx_pages * vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva) { - vm_vaddr_t vmx_gva = vm_vaddr_alloc_page(vm); + vm_vaddr_t vmx_gva = vm_vaddr_alloc_page(vm, MEMSLOT(0)); struct vmx_pages *vmx = addr_gva2hva(vm, vmx_gva); /* Setup of a region of guest memory for the vmxon region. */ - vmx->vmxon = (void *)vm_vaddr_alloc_page(vm); + vmx->vmxon = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); vmx->vmxon_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmxon); vmx->vmxon_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmxon); /* Setup of a region of guest memory for a vmcs. */ - vmx->vmcs = (void *)vm_vaddr_alloc_page(vm); + vmx->vmcs = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); vmx->vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmcs); vmx->vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmcs); /* Setup of a region of guest memory for the MSR bitmap. */ - vmx->msr = (void *)vm_vaddr_alloc_page(vm); + vmx->msr = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); vmx->msr_hva = addr_gva2hva(vm, (uintptr_t)vmx->msr); vmx->msr_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->msr); memset(vmx->msr_hva, 0, getpagesize()); /* Setup of a region of guest memory for the shadow VMCS. 
*/ - vmx->shadow_vmcs = (void *)vm_vaddr_alloc_page(vm); + vmx->shadow_vmcs = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); vmx->shadow_vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->shadow_vmcs); vmx->shadow_vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->shadow_vmcs); /* Setup of a region of guest memory for the VMREAD and VMWRITE bitmaps. */ - vmx->vmread = (void *)vm_vaddr_alloc_page(vm); + vmx->vmread = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); vmx->vmread_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmread); vmx->vmread_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmread); memset(vmx->vmread_hva, 0, getpagesize()); - vmx->vmwrite = (void *)vm_vaddr_alloc_page(vm); + vmx->vmwrite = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); vmx->vmwrite_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmwrite); vmx->vmwrite_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmwrite); memset(vmx->vmwrite_hva, 0, getpagesize()); /* Setup of a region of guest memory for the VP Assist page. */ - vmx->vp_assist = (void *)vm_vaddr_alloc_page(vm); + vmx->vp_assist = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); vmx->vp_assist_hva = addr_gva2hva(vm, (uintptr_t)vmx->vp_assist); vmx->vp_assist_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vp_assist); /* Setup of a region of guest memory for the enlightened VMCS. 
*/ - vmx->enlightened_vmcs = (void *)vm_vaddr_alloc_page(vm); + vmx->enlightened_vmcs = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); vmx->enlightened_vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->enlightened_vmcs); vmx->enlightened_vmcs_gpa = @@ -528,14 +528,14 @@ void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm, void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm, struct kvm_memslot eptp_memslot) { - vmx->eptp = (void *)vm_vaddr_alloc_page(vm); + vmx->eptp = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); vmx->eptp_hva = addr_gva2hva(vm, (uintptr_t)vmx->eptp); vmx->eptp_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->eptp); } void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm) { - vmx->apic_access = (void *)vm_vaddr_alloc_page(vm); + vmx->apic_access = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0)); vmx->apic_access_hva = addr_gva2hva(vm, (uintptr_t)vmx->apic_access); vmx->apic_access_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->apic_access); } diff --git a/tools/testing/selftests/kvm/x86_64/amx_test.c b/tools/testing/selftests/kvm/x86_64/amx_test.c index 52a3ef6629e8..2c12a3bf1f62 100644 --- a/tools/testing/selftests/kvm/x86_64/amx_test.c +++ b/tools/testing/selftests/kvm/x86_64/amx_test.c @@ -360,15 +360,15 @@ int main(int argc, char *argv[]) vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler); /* amx cfg for guest_code */ - amx_cfg = vm_vaddr_alloc_page(vm); + amx_cfg = vm_vaddr_alloc_page(vm, MEMSLOT(0)); memset(addr_gva2hva(vm, amx_cfg), 0x0, getpagesize()); /* amx tiledata for guest_code */ - tiledata = vm_vaddr_alloc_pages(vm, 2); + tiledata = vm_vaddr_alloc_pages(vm, 2, MEMSLOT(0)); memset(addr_gva2hva(vm, tiledata), rand() | 1, 2 * getpagesize()); /* xsave data for guest_code */ - xsavedata = vm_vaddr_alloc_pages(vm, 3); + xsavedata = vm_vaddr_alloc_pages(vm, 3, MEMSLOT(0)); memset(addr_gva2hva(vm, xsavedata), 0, 3 * getpagesize()); vcpu_args_set(vm, VCPU_ID, 3, amx_cfg, tiledata, xsavedata); diff --git 
a/tools/testing/selftests/kvm/x86_64/cpuid_test.c b/tools/testing/selftests/kvm/x86_64/cpuid_test.c index 16d2465c5634..d0250f32d729 100644 --- a/tools/testing/selftests/kvm/x86_64/cpuid_test.c +++ b/tools/testing/selftests/kvm/x86_64/cpuid_test.c @@ -145,7 +145,7 @@ static void run_vcpu(struct kvm_vm *vm, uint32_t vcpuid, int stage) struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, vm_vaddr_t *p_gva, struct kvm_cpuid2 *cpuid) { int size = sizeof(*cpuid) + cpuid->nent * sizeof(cpuid->entries[0]); - vm_vaddr_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR); + vm_vaddr_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR, MEMSLOT(0)); struct kvm_cpuid2 *guest_cpuids = addr_gva2hva(vm, gva); memcpy(guest_cpuids, cpuid, size); diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_clock.c b/tools/testing/selftests/kvm/x86_64/hyperv_clock.c index e0b2bb1339b1..8cc31ce181a0 100644 --- a/tools/testing/selftests/kvm/x86_64/hyperv_clock.c +++ b/tools/testing/selftests/kvm/x86_64/hyperv_clock.c @@ -214,7 +214,7 @@ int main(void) vcpu_set_hv_cpuid(vm, VCPU_ID); - tsc_page_gva = vm_vaddr_alloc_page(vm); + tsc_page_gva = vm_vaddr_alloc_page(vm, MEMSLOT(0)); memset(addr_gva2hva(vm, tsc_page_gva), 0x0, getpagesize()); TEST_ASSERT((addr_gva2gpa(vm, tsc_page_gva) & (getpagesize() - 1)) == 0, "TSC page has to be page aligned\n"); diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c index 672915ce73d8..64cbb2cabcda 100644 --- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c +++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c @@ -191,7 +191,7 @@ static void guest_test_msrs_access(void) while (true) { vm = vm_create_default(VCPU_ID, 0, guest_msr); - msr_gva = vm_vaddr_alloc_page(vm); + msr_gva = vm_vaddr_alloc_page(vm, MEMSLOT(0)); memset(addr_gva2hva(vm, msr_gva), 0x0, getpagesize()); msr = addr_gva2hva(vm, msr_gva); @@ -534,11 +534,11 @@ static void guest_test_hcalls_access(void) 
vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler); /* Hypercall input/output */ - hcall_page = vm_vaddr_alloc_pages(vm, 2); + hcall_page = vm_vaddr_alloc_pages(vm, 2, MEMSLOT(0)); hcall = addr_gva2hva(vm, hcall_page); memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize()); - hcall_params = vm_vaddr_alloc_page(vm); + hcall_params = vm_vaddr_alloc_page(vm, MEMSLOT(0)); memset(addr_gva2hva(vm, hcall_params), 0x0, getpagesize()); vcpu_args_set(vm, VCPU_ID, 2, addr_gva2gpa(vm, hcall_page), hcall_params); diff --git a/tools/testing/selftests/kvm/x86_64/kvm_clock_test.c b/tools/testing/selftests/kvm/x86_64/kvm_clock_test.c index 97731454f3f3..c0d3bc5a1e7d 100644 --- a/tools/testing/selftests/kvm/x86_64/kvm_clock_test.c +++ b/tools/testing/selftests/kvm/x86_64/kvm_clock_test.c @@ -194,7 +194,7 @@ int main(void) vm = vm_create_default(VCPU_ID, 0, guest_main); - pvti_gva = vm_vaddr_alloc(vm, getpagesize(), 0x10000); + pvti_gva = vm_vaddr_alloc(vm, getpagesize(), 0x10000, MEMSLOT(0)); pvti_gpa = addr_gva2gpa(vm, pvti_gva); vcpu_args_set(vm, VCPU_ID, 2, pvti_gpa, pvti_gva); diff --git a/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c index afbbc40df884..ffb92f304302 100644 --- a/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c +++ b/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c @@ -427,7 +427,7 @@ int main(int argc, char *argv[]) vm_vcpu_add_default(vm, SENDER_VCPU_ID, sender_guest_code); - test_data_page_vaddr = vm_vaddr_alloc_page(vm); + test_data_page_vaddr = vm_vaddr_alloc_page(vm, MEMSLOT(0)); data = (struct test_data_page *)addr_gva2hva(vm, test_data_page_vaddr); memset(data, 0, sizeof(*data));

From patchwork Thu Mar 10 16:45:24 2022
Subject: [PATCH 05/13] selftests: KVM: Add memslot parameter to elf_load
From: Ben Gardon
Date: Thu, 10 Mar 2022 08:45:24 -0800
Message-Id: <20220310164532.1821490-6-bgardon@google.com>
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>

Currently elf_load loads code into memslot 0. Add a parameter to allow loading code into any memslot. This will be useful for backing code pages with huge pages in future commits.

No functional change intended.
Signed-off-by: Ben Gardon --- tools/testing/selftests/kvm/aarch64/psci_cpu_on_test.c | 2 +- tools/testing/selftests/kvm/dirty_log_test.c | 2 +- tools/testing/selftests/kvm/hardware_disable_test.c | 2 +- tools/testing/selftests/kvm/include/kvm_util_base.h | 3 ++- tools/testing/selftests/kvm/lib/elf.c | 4 ++-- tools/testing/selftests/kvm/lib/kvm_util.c | 2 +- tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c | 2 +- 7 files changed, 9 insertions(+), 8 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/psci_cpu_on_test.c b/tools/testing/selftests/kvm/aarch64/psci_cpu_on_test.c index 4c5f6814030f..c3d5227a740e 100644 --- a/tools/testing/selftests/kvm/aarch64/psci_cpu_on_test.c +++ b/tools/testing/selftests/kvm/aarch64/psci_cpu_on_test.c @@ -77,7 +77,7 @@ int main(void) struct ucall uc; vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR); - kvm_vm_elf_load(vm, program_invocation_name); + kvm_vm_elf_load(vm, program_invocation_name, MEMSLOT(0)); ucall_init(vm, NULL); vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init); diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 1241c9a2729c..fe1054897ee2 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -686,7 +686,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid, pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode)); vm = vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR); - kvm_vm_elf_load(vm, program_invocation_name); + kvm_vm_elf_load(vm, program_invocation_name, MEMSLOT(0)); #ifdef __x86_64__ vm_create_irqchip(vm); #endif diff --git a/tools/testing/selftests/kvm/hardware_disable_test.c b/tools/testing/selftests/kvm/hardware_disable_test.c index b21c69a56daa..4f699781e065 100644 --- a/tools/testing/selftests/kvm/hardware_disable_test.c +++ b/tools/testing/selftests/kvm/hardware_disable_test.c @@ -105,7 +105,7 @@ static void 
run_test(uint32_t run) CPU_SET(i, &cpu_set); vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR); - kvm_vm_elf_load(vm, program_invocation_name); + kvm_vm_elf_load(vm, program_invocation_name, MEMSLOT(0)); vm_create_irqchip(vm); pr_debug("%s: [%d] start vcpus\n", __func__, run); diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index f70dfa3e1202..530b5272fae2 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -131,7 +131,8 @@ uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm); int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva, size_t len); -void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename); +void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename, + struct kvm_memslot memslot); int kvm_memfd_alloc(size_t size, bool hugepages); void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent); diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c index 88d03cb80423..763f489492f1 100644 --- a/tools/testing/selftests/kvm/lib/elf.c +++ b/tools/testing/selftests/kvm/lib/elf.c @@ -111,7 +111,7 @@ static void elfhdr_get(const char *filename, Elf64_Ehdr *hdrp) * by the image and it needs to have sufficient available physical pages, to * back the virtual pages used to load the image. 
*/ -void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename) +void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename, struct kvm_memslot memslot) { off_t offset, offset_rv; Elf64_Ehdr hdr; @@ -163,7 +163,7 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename) size_t seg_size = seg_vend - seg_vstart + 1; vm_vaddr_t vaddr = vm_vaddr_alloc(vm, seg_size, seg_vstart, - MEMSLOT(0)); + memslot); TEST_ASSERT(vaddr == seg_vstart, "Unable to allocate " "virtual memory for segment at requested min addr,\n" " segment idx: %u\n" diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 04abfc7e6b5c..a10bee651191 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -368,7 +368,7 @@ struct kvm_vm *vm_create_without_vcpus(enum vm_guest_mode mode, uint64_t pages) vm = vm_create(mode, pages, O_RDWR); - kvm_vm_elf_load(vm, program_invocation_name); + kvm_vm_elf_load(vm, program_invocation_name, MEMSLOT(0)); #ifdef __x86_64__ vm_create_irqchip(vm); diff --git a/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c b/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c index ae76436af0cc..3a7783046895 100644 --- a/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c +++ b/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c @@ -90,7 +90,7 @@ static struct kvm_vm *create_vm(void) pages = vm_adjust_num_guest_pages(VM_MODE_DEFAULT, pages); vm = vm_create(VM_MODE_DEFAULT, pages, O_RDWR); - kvm_vm_elf_load(vm, program_invocation_name); + kvm_vm_elf_load(vm, program_invocation_name, MEMSLOT(0)); vm_create_irqchip(vm); return vm;

From patchwork Thu Mar 10 16:45:25 2022
Subject: [PATCH 06/13] selftests: KVM: Improve error message in vm_phy_pages_alloc
From: Ben Gardon
Date: Thu, 10 Mar 2022 08:45:25 -0800
Message-Id: <20220310164532.1821490-7-bgardon@google.com>
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>

Make an error message in vm_phy_pages_alloc more specific, and log the number of pages requested in the allocation.

Signed-off-by: Ben Gardon --- tools/testing/selftests/kvm/lib/kvm_util.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index a10bee651191..f9591dad1010 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -2408,9 +2408,10 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, } while (pg && pg != base + num); if (pg == 0) { - fprintf(stderr, "No guest physical page available, " + fprintf(stderr, + "Unable to find %ld contiguous guest physical pages.
" "paddr_min: 0x%lx page_size: 0x%x memslot: %u\n", - paddr_min, vm->page_size, memslot.id); + num, paddr_min, vm->page_size, memslot.id); fputs("---- vm dump ----\n", stderr); vm_dump(stderr, vm, 2); abort();

From patchwork Thu Mar 10 16:45:26 2022
Subject: [PATCH 07/13] selftests: KVM: Add NX huge pages test
From: Ben Gardon
Date: Thu, 10 Mar 2022 08:45:26 -0800
Message-Id: <20220310164532.1821490-8-bgardon@google.com>
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>

There's currently no test coverage of NX hugepages in KVM selftests, so add a basic test to ensure that the feature works as intended.
Signed-off-by: Ben Gardon
---
 tools/testing/selftests/kvm/Makefile          |   3 +-
 .../kvm/lib/x86_64/nx_huge_pages_guest.S      |  45 +++++++
 .../selftests/kvm/x86_64/nx_huge_pages_test.c | 122 ++++++++++++++++++
 .../kvm/x86_64/nx_huge_pages_test.sh          |  25 ++++
 4 files changed, 194 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
 create mode 100644 tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
 create mode 100755 tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 04099f453b59..6ee30c0df323 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -38,7 +38,7 @@ ifeq ($(ARCH),riscv)
 endif

 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
-LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
+LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S lib/x86_64/nx_huge_pages_guest.S
 LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c
 LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
 LIBKVM_riscv = lib/riscv/processor.c lib/riscv/ucall.c
@@ -56,6 +56,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/kvm_clock_test
 TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
 TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
 TEST_GEN_PROGS_x86_64 += x86_64/mmu_role_test
+TEST_GEN_PROGS_x86_64 += x86_64/nx_huge_pages_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
 TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id

diff --git a/tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S b/tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
new file mode 100644
index 000000000000..09c66b9562a3
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * tools/testing/selftests/kvm/nx_huge_page_guest.S
+ *
+ * Copyright (C) 2022, Google LLC.
+ */
+
+.include "kvm_util.h"
+
+#define HPAGE_SIZE (2*1024*1024)
+#define PORT_SUCCESS 0x70
+
+.global guest_code0
+.global guest_code1
+
+.align HPAGE_SIZE
+exit_vm:
+	mov $0x1,%edi
+	mov $0x2,%esi
+	mov a_string,%edx
+	mov $0x1,%ecx
+	xor %eax,%eax
+	jmp ucall
+
+
+guest_code0:
+	mov data1, %eax
+	mov data2, %eax
+	jmp exit_vm
+
+.align HPAGE_SIZE
+guest_code1:
+	mov data1, %eax
+	mov data2, %eax
+	jmp exit_vm
+data1:
+.quad 0
+
+.align HPAGE_SIZE
+data2:
+.quad 0
+a_string:
+.string "why does the ucall function take a string argument?"
+
+
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
new file mode 100644
index 000000000000..5cbcc777d0ab
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * tools/testing/selftests/kvm/nx_huge_page_test.c
+ *
+ * Usage: to be run via nx_huge_page_test.sh, which does the necessary
+ * environment setup and teardown
+ *
+ * Copyright (C) 2022, Google LLC.
+ */ + +#define _GNU_SOURCE + +#include +#include + +#include +#include "kvm_util.h" + +#define HPAGE_SLOT MEMSLOT(10) +#define HPAGE_PADDR_START (10*1024*1024) +#define HPAGE_SLOT_NPAGES (100*1024*1024/4096) + +/* Defined in nx_huge_page_guest.S */ +void guest_code0(void); +void guest_code1(void); + +static void run_guest_code(struct kvm_vm *vm, void (*guest_code)(void)) +{ + struct kvm_regs regs; + + vcpu_regs_get(vm, 0, ®s); + regs.rip = (uint64_t)guest_code; + vcpu_regs_set(vm, 0, ®s); + vcpu_run(vm, 0); +} + +static void check_2m_page_count(struct kvm_vm *vm, int expected_pages_2m) +{ + int actual_pages_2m; + + actual_pages_2m = vm_get_single_stat(vm, "pages_2m"); + + TEST_ASSERT(actual_pages_2m == expected_pages_2m, + "Unexpected 2m page count. Expected %d, got %d", + expected_pages_2m, actual_pages_2m); +} + +static void check_split_count(struct kvm_vm *vm, int expected_splits) +{ + int actual_splits; + + actual_splits = vm_get_single_stat(vm, "nx_lpage_splits"); + + TEST_ASSERT(actual_splits == expected_splits, + "Unexpected nx lpage split count. Expected %d, got %d", + expected_splits, actual_splits); +} + +int main(int argc, char **argv) +{ + struct kvm_vm *vm; + + vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR); + + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_HUGETLB, + HPAGE_PADDR_START, HPAGE_SLOT, + HPAGE_SLOT_NPAGES, 0); + + kvm_vm_elf_load(vm, program_invocation_name, HPAGE_SLOT); + + vm_vcpu_add_default(vm, 0, guest_code0); + + check_2m_page_count(vm, 0); + check_split_count(vm, 0); + + /* + * Running guest_code0 will access data1 and data2. + * This should result in part of the huge page containing guest_code0, + * and part of the hugepage containing the ucall function being mapped + * at 4K. The huge pages containing data1 and data2 will be mapped + * at 2M. 
+ */
+	run_guest_code(vm, guest_code0);
+	check_2m_page_count(vm, 2);
+	check_split_count(vm, 2);
+
+	/*
+	 * guest_code1 is in the same huge page as data1, so it will cause
+	 * that huge page to be remapped at 4k.
+	 */
+	run_guest_code(vm, guest_code1);
+	check_2m_page_count(vm, 1);
+	check_split_count(vm, 3);
+
+	/* Run guest_code0 again to check that it has no effect. */
+	run_guest_code(vm, guest_code0);
+	check_2m_page_count(vm, 1);
+	check_split_count(vm, 3);
+
+	/* Give recovery thread time to run */
+	sleep(3);
+	check_2m_page_count(vm, 1);
+	check_split_count(vm, 0);
+
+	/*
+	 * The split 2M pages should have been reclaimed, so run guest_code0
+	 * again to check that pages are mapped at 2M again.
+	 */
+	run_guest_code(vm, guest_code0);
+	check_2m_page_count(vm, 2);
+	check_split_count(vm, 2);
+
+	/* Pages are once again split from running guest_code1. */
+	run_guest_code(vm, guest_code1);
+	check_2m_page_count(vm, 1);
+	check_split_count(vm, 3);
+
+	kvm_vm_free(vm);
+
+	return 0;
+}
+
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
new file mode 100755
index 000000000000..a5f946fb0626
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0-only
+
+# tools/testing/selftests/kvm/nx_huge_page_test.sh
+# Copyright (C) 2022, Google LLC.
+
+NX_HUGE_PAGES=$(cat /sys/module/kvm/parameters/nx_huge_pages)
+NX_HUGE_PAGES_RECOVERY_RATIO=$(cat /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio)
+NX_HUGE_PAGES_RECOVERY_PERIOD=$(cat /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms)
+HUGE_PAGES=$(cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages)
+
+echo 1 > /sys/module/kvm/parameters/nx_huge_pages
+echo 1 > /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio
+echo 2 > /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms
+echo 200 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+./nx_huge_pages_test
+RET=$?
+
+echo $NX_HUGE_PAGES > /sys/module/kvm/parameters/nx_huge_pages
+echo $NX_HUGE_PAGES_RECOVERY_RATIO > /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio
+echo $NX_HUGE_PAGES_RECOVERY_PERIOD > /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms
+echo $HUGE_PAGES > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+exit $RET

From patchwork Thu Mar 10 16:45:27 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12776761
Date: Thu, 10 Mar 2022 08:45:27 -0800
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>
Message-Id: <20220310164532.1821490-9-bgardon@google.com>
Mime-Version: 1.0
References:
<20220310164532.1821490-1-bgardon@google.com>
X-Mailer: git-send-email 2.35.1.616.g0bdcbb4464-goog
Subject: [PATCH 08/13] KVM: x86/MMU: Factor out updating NX hugepages state for a VM
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

Factor out the code to update the NX hugepages state for an individual VM. This will be expanded in future commits to allow per-VM control of NX hugepages.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3b8da8b0745e..1b59b56642f1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6195,6 +6195,15 @@ static void __set_nx_huge_pages(bool val)
 	nx_huge_pages = itlb_multihit_kvm_mitigation = val;
 }

+static int kvm_update_nx_huge_pages(struct kvm *kvm)
+{
+	mutex_lock(&kvm->slots_lock);
+	kvm_mmu_zap_all_fast(kvm);
+	mutex_unlock(&kvm->slots_lock);
+
+	wake_up_process(kvm->arch.nx_lpage_recovery_thread);
+}
+
 static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
 {
 	bool old_val = nx_huge_pages;
@@ -6217,13 +6226,8 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)

 		mutex_lock(&kvm_lock);

-		list_for_each_entry(kvm, &vm_list, vm_list) {
-			mutex_lock(&kvm->slots_lock);
-			kvm_mmu_zap_all_fast(kvm);
-			mutex_unlock(&kvm->slots_lock);
-
-			wake_up_process(kvm->arch.nx_lpage_recovery_thread);
-		}
+		list_for_each_entry(kvm, &vm_list, vm_list)
+			kvm_set_nx_huge_pages(kvm);

 		mutex_unlock(&kvm_lock);
 	}

From patchwork Thu Mar 10 16:45:28 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12776760
Date: Thu, 10 Mar 2022 08:45:28 -0800
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>
Message-Id: <20220310164532.1821490-10-bgardon@google.com>
Mime-Version: 1.0
References: <20220310164532.1821490-1-bgardon@google.com>
Subject: [PATCH 09/13] KVM: x86/MMU: Track NX hugepages on a per-VM basis
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

Track whether NX hugepages are enabled on a per-VM basis instead of as a host-wide setting. With this commit, the per-VM state will always be the same as the host-wide setting, but in future commits, it will be allowed to differ.

No functional change intended.
Signed-off-by: Ben Gardon --- arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/mmu.h | 8 ++++---- arch/x86/kvm/mmu/mmu.c | 7 +++++-- arch/x86/kvm/mmu/spte.c | 7 ++++--- arch/x86/kvm/mmu/spte.h | 3 ++- arch/x86/kvm/mmu/tdp_mmu.c | 3 ++- 6 files changed, 19 insertions(+), 11 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index f72e80178ffc..0a0c54639dd8 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1240,6 +1240,8 @@ struct kvm_arch { hpa_t hv_root_tdp; spinlock_t hv_root_tdp_lock; #endif + + bool nx_huge_pages; }; struct kvm_vm_stat { diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index bf8dbc4bb12a..dd28fe8d13ae 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -173,9 +173,9 @@ struct kvm_page_fault { int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); extern int nx_huge_pages; -static inline bool is_nx_huge_page_enabled(void) +static inline bool is_nx_huge_page_enabled(struct kvm *kvm) { - return READ_ONCE(nx_huge_pages); + return READ_ONCE(kvm->arch.nx_huge_pages); } static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, @@ -191,8 +191,8 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, .user = err & PFERR_USER_MASK, .prefetch = prefetch, .is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault), - .nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(), - + .nx_huge_page_workaround_enabled = + is_nx_huge_page_enabled(vcpu->kvm), .max_level = KVM_MAX_HUGEPAGE_LEVEL, .req_level = PG_LEVEL_4K, .goal_level = PG_LEVEL_4K, diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 1b59b56642f1..dc9672f70468 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -6195,8 +6195,10 @@ static void __set_nx_huge_pages(bool val) nx_huge_pages = itlb_multihit_kvm_mitigation = val; } -static int kvm_update_nx_huge_pages(struct kvm *kvm) +static void 
kvm_update_nx_huge_pages(struct kvm *kvm) { + kvm->arch.nx_huge_pages = nx_huge_pages; + mutex_lock(&kvm->slots_lock); kvm_mmu_zap_all_fast(kvm); mutex_unlock(&kvm->slots_lock); @@ -6227,7 +6229,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp) mutex_lock(&kvm_lock); list_for_each_entry(kvm, &vm_list, vm_list) - kvm_set_nx_huge_pages(kvm); + kvm_update_nx_huge_pages(kvm); mutex_unlock(&kvm_lock); } @@ -6448,6 +6450,7 @@ int kvm_mmu_post_init_vm(struct kvm *kvm) { int err; + kvm->arch.nx_huge_pages = READ_ONCE(nx_huge_pages); err = kvm_vm_create_worker_thread(kvm, kvm_nx_lpage_recovery_worker, 0, "kvm-nx-lpage-recovery", &kvm->arch.nx_lpage_recovery_thread); diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 4739b53c9734..877ad30bc7ad 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -116,7 +116,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, spte |= spte_shadow_accessed_mask(spte); if (level > PG_LEVEL_4K && (pte_access & ACC_EXEC_MASK) && - is_nx_huge_page_enabled()) { + is_nx_huge_page_enabled(vcpu->kvm)) { pte_access &= ~ACC_EXEC_MASK; } @@ -215,7 +215,8 @@ static u64 make_spte_executable(u64 spte) * This is used during huge page splitting to build the SPTEs that make up the * new page table. */ -u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index) +u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte, int huge_level, + int index) { u64 child_spte; int child_level; @@ -243,7 +244,7 @@ u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index) * When splitting to a 4K page, mark the page executable as the * NX hugepage mitigation no longer applies. 
*/ - if (is_nx_huge_page_enabled()) + if (is_nx_huge_page_enabled(kvm)) child_spte = make_spte_executable(child_spte); } diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 73f12615416f..e4142caff4b1 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -415,7 +415,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, u64 *new_spte); -u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index); +u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte, int huge_level, + int index); u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled); u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access); u64 mark_spte_for_access_track(u64 spte); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index af60922906ef..98a45a87f0b2 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1466,7 +1466,8 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter, * not been linked in yet and thus is not reachable from any other CPU. 
 */
 	for (i = 0; i < PT64_ENT_PER_PAGE; i++)
-		sp->spt[i] = make_huge_page_split_spte(huge_spte, level, i);
+		sp->spt[i] = make_huge_page_split_spte(kvm, huge_spte,
+						       level, i);

 	/*
 	 * Replace the huge spte with a pointer to the populated lower level

From patchwork Thu Mar 10 16:45:29 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12776756
Date: Thu, 10 Mar 2022 08:45:29 -0800
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>
Message-Id: <20220310164532.1821490-11-bgardon@google.com>
Mime-Version: 1.0
References: <20220310164532.1821490-1-bgardon@google.com>
Subject: [PATCH 10/13] KVM: x86/MMU: Allow NX huge pages to be disabled on a per-vm basis
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

In some cases, the NX hugepage mitigation for iTLB multihit is not needed for all guests on a host. Allow disabling the mitigation on a per-VM basis to avoid the performance hit of NX hugepages on trusted workloads.
Signed-off-by: Ben Gardon --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/mmu.h | 1 + arch/x86/kvm/mmu/mmu.c | 6 ++++-- arch/x86/kvm/x86.c | 6 ++++++ include/uapi/linux/kvm.h | 1 + 5 files changed, 13 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 0a0c54639dd8..04ddfc475ce0 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1242,6 +1242,7 @@ struct kvm_arch { #endif bool nx_huge_pages; + bool disable_nx_huge_pages; }; struct kvm_vm_stat { diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index dd28fe8d13ae..36d8d84ca6c6 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -177,6 +177,7 @@ static inline bool is_nx_huge_page_enabled(struct kvm *kvm) { return READ_ONCE(kvm->arch.nx_huge_pages); } +void kvm_update_nx_huge_pages(struct kvm *kvm); static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u32 err, bool prefetch) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index dc9672f70468..a7d387ccfd74 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -6195,9 +6195,10 @@ static void __set_nx_huge_pages(bool val) nx_huge_pages = itlb_multihit_kvm_mitigation = val; } -static void kvm_update_nx_huge_pages(struct kvm *kvm) +void kvm_update_nx_huge_pages(struct kvm *kvm) { - kvm->arch.nx_huge_pages = nx_huge_pages; + kvm->arch.nx_huge_pages = nx_huge_pages && + !kvm->arch.disable_nx_huge_pages; mutex_lock(&kvm->slots_lock); kvm_mmu_zap_all_fast(kvm); @@ -6451,6 +6452,7 @@ int kvm_mmu_post_init_vm(struct kvm *kvm) int err; kvm->arch.nx_huge_pages = READ_ONCE(nx_huge_pages); + kvm->arch.disable_nx_huge_pages = false; err = kvm_vm_create_worker_thread(kvm, kvm_nx_lpage_recovery_worker, 0, "kvm-nx-lpage-recovery", &kvm->arch.nx_lpage_recovery_thread); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 51106d32f04e..73df90a6932b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4256,6 
+4256,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_SYS_ATTRIBUTES: case KVM_CAP_VAPIC: case KVM_CAP_ENABLE_CAP: + case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES: r = 1; break; case KVM_CAP_EXIT_HYPERCALL: @@ -6048,6 +6049,11 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm, } mutex_unlock(&kvm->lock); break; + case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES: + kvm->arch.disable_nx_huge_pages = true; + kvm_update_nx_huge_pages(kvm); + r = 0; + break; default: r = -EINVAL; break; diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index ee5cc9e2a837..6f9fa7ecfd1e 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1144,6 +1144,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_S390_MEM_OP_EXTENSION 211 #define KVM_CAP_PMU_CAPABILITY 212 #define KVM_CAP_DISABLE_QUIRKS2 213 +#define KVM_CAP_VM_DISABLE_NX_HUGE_PAGES 214 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Thu Mar 10 16:45:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12776758 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB4D1C433F5 for ; Thu, 10 Mar 2022 16:47:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244279AbiCJQrp (ORCPT ); Thu, 10 Mar 2022 11:47:45 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57374 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244125AbiCJQr3 (ORCPT ); Thu, 10 Mar 2022 11:47:29 -0500 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 64F221986EA for ; Thu, 10 Mar 2022 08:46:17 -0800 (PST) Received: by mail-pl1-x64a.google.com with SMTP id 
Date: Thu, 10 Mar 2022 08:45:30 -0800
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>
Message-Id: <20220310164532.1821490-12-bgardon@google.com>
Mime-Version: 1.0
References: <20220310164532.1821490-1-bgardon@google.com>
X-Mailer: git-send-email
2.35.1.616.g0bdcbb4464-goog
Subject: [PATCH 11/13] KVM: x86: Fix errant brace in KVM capability handling
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

The braces around the KVM_CAP_XSAVE2 block also surround the KVM_CAP_PMU_CAPABILITY block, likely the result of a merge issue. Simply move the curly brace back to where it belongs.

Fixes: ba7bb663f5547 ("KVM: x86: Provide per VM capability for disabling PMU virtualization")
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/x86.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 73df90a6932b..74351cbb9b5b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4352,10 +4352,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		if (r < sizeof(struct kvm_xsave))
 			r = sizeof(struct kvm_xsave);
 		break;
+	}
 	case KVM_CAP_PMU_CAPABILITY:
 		r = enable_pmu ?
KVM_CAP_PMU_VALID_MASK : 0;
 		break;
-	}
 	case KVM_CAP_DISABLE_QUIRKS2:
 		r = KVM_X86_VALID_QUIRKS;
 		break;

From patchwork Thu Mar 10 16:45:31 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12776757
Date: Thu, 10 Mar 2022 08:45:31 -0800
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>
Message-Id: <20220310164532.1821490-13-bgardon@google.com>
Mime-Version: 1.0
References: <20220310164532.1821490-1-bgardon@google.com>
Subject: [PATCH 12/13] KVM: x86/MMU: Require reboot permission to disable NX hugepages
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

Ensure that the userspace actor attempting to disable NX hugepages has permission to reboot the system. Since disabling NX hugepages would allow a guest to crash the system, it is similar to reboot permissions.

This approach is the simplest permission gating, but passing a file descriptor opened for write for the module parameter would also work well and be more precise. The latter approach was suggested by Sean Christopherson.
Suggested-by: Jim Mattson
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/x86.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 74351cbb9b5b..995f30667619 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4256,7 +4256,6 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_SYS_ATTRIBUTES:
 	case KVM_CAP_VAPIC:
 	case KVM_CAP_ENABLE_CAP:
-	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
 		r = 1;
 		break;
 	case KVM_CAP_EXIT_HYPERCALL:
@@ -4359,6 +4358,14 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_DISABLE_QUIRKS2:
 		r = KVM_X86_VALID_QUIRKS;
 		break;
+	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
+		/*
+		 * Since the risk of disabling NX hugepages is a guest crashing
+		 * the system, ensure the userspace process has permission to
+		 * reboot the system.
+		 */
+		r = capable(CAP_SYS_BOOT);
+		break;
 	default:
 		break;
 	}
@@ -6050,6 +6057,15 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		mutex_unlock(&kvm->lock);
 		break;
 	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
+		/*
+		 * Since the risk of disabling NX hugepages is a guest crashing
+		 * the system, ensure the userspace process has permission to
+		 * reboot the system.
+		 */
+		if (!capable(CAP_SYS_BOOT)) {
+			r = -EPERM;
+			break;
+		}
 		kvm->arch.disable_nx_huge_pages = true;
 		kvm_update_nx_huge_pages(kvm);
 		r = 0;

From patchwork Thu Mar 10 16:45:32 2022
From: Ben Gardon
Date: Thu, 10 Mar 2022 08:45:32 -0800
Subject: [PATCH 13/13] selftests: KVM: Test disabling NX hugepages on a VM
Message-Id: <20220310164532.1821490-14-bgardon@google.com>
In-Reply-To: <20220310164532.1821490-1-bgardon@google.com>
References: <20220310164532.1821490-1-bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon
X-Mailing-List: kvm@vger.kernel.org

Add an argument to the NX huge pages test to test disabling the feature
on a VM using the new capability.
Signed-off-by: Ben Gardon
---
 .../selftests/kvm/include/kvm_util_base.h     |  2 +
 tools/testing/selftests/kvm/lib/kvm_util.c    |  7 +++
 .../selftests/kvm/x86_64/nx_huge_pages_test.c | 49 ++++++++++++++-----
 .../kvm/x86_64/nx_huge_pages_test.sh          |  2 +-
 4 files changed, 48 insertions(+), 12 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 530b5272fae2..8302cf9b1e1d 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -420,4 +420,6 @@ uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name);
 uint32_t guest_get_vcpuid(void);
 
+void vm_disable_nx_huge_pages(struct kvm_vm *vm);
+
 #endif /* SELFTEST_KVM_UTIL_BASE_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f9591dad1010..880786fe9fac 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2760,3 +2760,10 @@ uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name)
 	return value;
 }
+
+void vm_disable_nx_huge_pages(struct kvm_vm *vm)
+{
+	struct kvm_enable_cap cap = { 0 };
+
+	cap.cap = KVM_CAP_VM_DISABLE_NX_HUGE_PAGES;
+	vm_enable_cap(vm, &cap);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
index 5cbcc777d0ab..1020a4758664 100644
--- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
@@ -56,12 +56,39 @@ static void check_split_count(struct kvm_vm *vm, int expected_splits)
 		       expected_splits, actual_splits);
 }
 
+static void help(void)
+{
+	puts("");
+	printf("usage: nx_huge_pages_test.sh [-x]\n");
+	puts("");
+	printf(" -x: Allow executable huge pages on the VM.\n");
+	puts("");
+	exit(0);
+}
+
 int main(int argc, char **argv)
 {
 	struct kvm_vm *vm;
+	bool disable_nx = false;
+	int opt;
+
+	while ((opt = getopt(argc, argv, "x")) != -1) {
+		switch (opt) {
+		case 'x':
+			disable_nx = true;
+			break;
+		case 'h':
+		default:
+			help();
+			break;
+		}
+	}
 
 	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
 
+	if (disable_nx)
+		vm_disable_nx_huge_pages(vm);
+
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_HUGETLB,
 				    HPAGE_PADDR_START, HPAGE_SLOT,
 				    HPAGE_SLOT_NPAGES, 0);
@@ -81,25 +108,25 @@ int main(int argc, char **argv)
 	 * at 2M.
 	 */
 	run_guest_code(vm, guest_code0);
-	check_2m_page_count(vm, 2);
-	check_split_count(vm, 2);
+	check_2m_page_count(vm, disable_nx ? 4 : 2);
+	check_split_count(vm, disable_nx ? 0 : 2);
 
 	/*
 	 * guest_code1 is in the same huge page as data1, so it will cause
 	 * that huge page to be remapped at 4k.
 	 */
 	run_guest_code(vm, guest_code1);
-	check_2m_page_count(vm, 1);
-	check_split_count(vm, 3);
+	check_2m_page_count(vm, disable_nx ? 4 : 1);
+	check_split_count(vm, disable_nx ? 0 : 3);
 
 	/* Run guest_code0 again to check that is has no effect. */
 	run_guest_code(vm, guest_code0);
-	check_2m_page_count(vm, 1);
-	check_split_count(vm, 3);
+	check_2m_page_count(vm, disable_nx ? 4 : 1);
+	check_split_count(vm, disable_nx ? 0 : 3);
 
 	/* Give recovery thread time to run */
 	sleep(3);
-	check_2m_page_count(vm, 1);
+	check_2m_page_count(vm, disable_nx ? 4 : 1);
 	check_split_count(vm, 0);
 
 	/*
@@ -107,13 +134,13 @@ int main(int argc, char **argv)
 	 * again to check that pages are mapped at 2M again.
 	 */
 	run_guest_code(vm, guest_code0);
-	check_2m_page_count(vm, 2);
-	check_split_count(vm, 2);
+	check_2m_page_count(vm, disable_nx ? 4 : 2);
+	check_split_count(vm, disable_nx ? 0 : 2);
 
 	/* Pages are once again split from running guest_code1. */
 	run_guest_code(vm, guest_code1);
-	check_2m_page_count(vm, 1);
-	check_split_count(vm, 3);
+	check_2m_page_count(vm, disable_nx ? 4 : 1);
+	check_split_count(vm, disable_nx ? 0 : 3);
 
 	kvm_vm_free(vm);
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
index a5f946fb0626..205d8c9fd750 100755
--- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
@@ -14,7 +14,7 @@ echo 1 > /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio
 echo 2 > /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms
 echo 200 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
 
-./nx_huge_pages_test
+./nx_huge_pages_test "${@}"
 RET=$?
 
 echo $NX_HUGE_PAGES > /sys/module/kvm/parameters/nx_huge_pages