From patchwork Thu Sep 29 18:12:05 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12994541
Date: Thu, 29 Sep 2022 11:12:05 -0700
In-Reply-To: <20220929181207.2281449-1-dmatlack@google.com>
References: <20220929181207.2281449-1-dmatlack@google.com>
Message-ID: <20220929181207.2281449-2-dmatlack@google.com>
Subject: [PATCH v3 1/3] KVM: selftests: Tell the compiler that code after
 TEST_FAIL() is unreachable
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, Andrew Jones, David Matlack, Ben Gardon, Peter Xu,
 Jim Mattson, Ricardo Koller, Yang Zhong, Wei Wang, kvm@vger.kernel.org

Add __builtin_unreachable() to TEST_FAIL() so that the compiler knows that
any code after a TEST_FAIL() is unreachable.
Signed-off-by: David Matlack
---
 tools/testing/selftests/kvm/include/test_util.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 5c5a88180b6c..befc754ce9b3 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -63,8 +63,10 @@ void test_assert(bool exp, const char *exp_str,
 		    #a, #b, #a, (unsigned long) __a, #b, (unsigned long) __b); \
 } while (0)
 
-#define TEST_FAIL(fmt, ...) \
-	TEST_ASSERT(false, fmt, ##__VA_ARGS__)
+#define TEST_FAIL(fmt, ...) do {					\
+	TEST_ASSERT(false, fmt, ##__VA_ARGS__);				\
+	__builtin_unreachable();					\
+} while (0)
 
 size_t parse_size(const char *size);

From patchwork Thu Sep 29 18:12:06 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12994542
Date: Thu, 29 Sep 2022 11:12:06 -0700
In-Reply-To: <20220929181207.2281449-1-dmatlack@google.com>
References: <20220929181207.2281449-1-dmatlack@google.com>
Message-ID: <20220929181207.2281449-3-dmatlack@google.com>
Subject: [PATCH v3 2/3] KVM: selftests: Add helpers to read kvm_{intel,amd}
 boolean module parameters
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, Andrew Jones, David Matlack, Ben Gardon, Peter Xu,
 Jim Mattson, Ricardo Koller, Yang Zhong, Wei Wang, kvm@vger.kernel.org

Add helper functions for reading the value of kvm_intel and kvm_amd
boolean module parameters. Use the kvm_intel variant in
vm_is_unrestricted_guest() to simplify the check for
kvm_intel.unrestricted_guest.

No functional change intended.

Signed-off-by: David Matlack
---
 .../selftests/kvm/include/kvm_util_base.h     |  4 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 39 +++++++++++++++++++
 .../selftests/kvm/lib/x86_64/processor.c      | 13 +------
 3 files changed, 44 insertions(+), 12 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 24fde97f6121..e42a09cd24a0 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -175,6 +175,10 @@ extern const struct vm_guest_mode_params vm_guest_mode_params[];
 
 int open_path_or_exit(const char *path, int flags);
 int open_kvm_dev_path_or_exit(void);
+
+bool get_kvm_intel_param_bool(const char *param);
+bool get_kvm_amd_param_bool(const char *param);
+
 unsigned int kvm_check_cap(long cap);
 
 static inline bool kvm_has_cap(long cap)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9889fe0d8919..504c1e1355c3 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -50,6 +50,45 @@ int open_kvm_dev_path_or_exit(void)
 	return _open_kvm_dev_path_or_exit(O_RDONLY);
 }
 
+static bool get_module_param_bool(const char *module_name, const char *param)
+{
+	const int path_size = 128;
+	char path[path_size];
+	char value;
+	ssize_t r;
+	int fd;
+
+	r = snprintf(path, path_size, "/sys/module/%s/parameters/%s",
+		     module_name, param);
+	TEST_ASSERT(r < path_size,
+		    "Failed to construct sysfs path in %d bytes.", path_size);
+
+	fd = open_path_or_exit(path, O_RDONLY);
+
+	r = read(fd, &value, 1);
+	TEST_ASSERT(r == 1, "read(%s) failed", path);
+
+	r = close(fd);
+	TEST_ASSERT(!r, "close(%s) failed", path);
+
+	if (value == 'Y')
+		return true;
+	else if (value == 'N')
+		return false;
+
+	TEST_FAIL("Unrecognized value '%c' for boolean module param", value);
+}
+
+bool get_kvm_intel_param_bool(const char *param)
+{
+	return get_module_param_bool("kvm_intel", param);
+}
+
+bool get_kvm_amd_param_bool(const char *param)
+{
+	return get_module_param_bool("kvm_amd", param);
+}
+
 /*
  * Capability
  *
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 2e6e61bbe81b..fab0f526fb81 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1294,20 +1294,9 @@ unsigned long vm_compute_max_gfn(struct kvm_vm *vm)
 /* Returns true if kvm_intel was loaded with unrestricted_guest=1. */
 bool vm_is_unrestricted_guest(struct kvm_vm *vm)
 {
-	char val = 'N';
-	size_t count;
-	FILE *f;
-
 	/* Ensure that a KVM vendor-specific module is loaded. */
 	if (vm == NULL)
 		close(open_kvm_dev_path_or_exit());
 
-	f = fopen("/sys/module/kvm_intel/parameters/unrestricted_guest", "r");
-	if (f) {
-		count = fread(&val, sizeof(char), 1, f);
-		TEST_ASSERT(count == 1, "Unable to read from param file.");
-		fclose(f);
-	}
-
-	return val == 'Y';
+	return get_kvm_intel_param_bool("unrestricted_guest");
 }

From patchwork Thu Sep 29 18:12:07 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12994543
Date: Thu, 29 Sep 2022 11:12:07 -0700
In-Reply-To: <20220929181207.2281449-1-dmatlack@google.com>
References: <20220929181207.2281449-1-dmatlack@google.com>
Message-ID: <20220929181207.2281449-4-dmatlack@google.com>
Subject: [PATCH v3 3/3] KVM: selftests: Fix nx_huge_pages_test on
 TDP-disabled hosts
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, Andrew Jones, David Matlack, Ben Gardon, Peter Xu,
 Jim Mattson, Ricardo Koller, Yang Zhong, Wei Wang, kvm@vger.kernel.org

Map the test's huge page region with 2MiB virtual mappings when TDP is
disabled so that KVM can shadow the region with huge pages. This fixes
nx_huge_pages_test on hosts where TDP hardware support is disabled.

Purposely do not skip this test on TDP-disabled hosts. While we don't care
about NX Huge Pages on TDP-disabled hosts from a security perspective, KVM
does support it, and so we should test it.

For TDP-enabled hosts, continue mapping the region with 4KiB pages to
ensure that KVM can map it with huge pages irrespective of the guest
mappings.

Fixes: 8448ec5993be ("KVM: selftests: Add NX huge pages test")
Signed-off-by: David Matlack
---
 .../selftests/kvm/include/x86_64/processor.h  |  4 +++
 .../selftests/kvm/lib/x86_64/processor.c      | 27 +++++++++++++++++++
 .../selftests/kvm/x86_64/nx_huge_pages_test.c | 19 +++++++++++--
 3 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 0cbc71b7af50..e8ca0d8a6a7e 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -825,6 +825,8 @@ static inline uint8_t wrmsr_safe(uint32_t msr, uint64_t val)
 	return kvm_asm_safe("wrmsr", "a"(val & -1u), "d"(val >> 32), "c"(msr));
 }
 
+bool kvm_is_tdp_enabled(void);
+
 uint64_t vm_get_page_table_entry(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
 				 uint64_t vaddr);
 void vm_set_page_table_entry(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
@@ -855,6 +857,8 @@ enum pg_level {
 #define PG_SIZE_1G PG_LEVEL_SIZE(PG_LEVEL_1G)
 
 void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level);
+void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		    uint64_t nr_bytes, int level);
 
 /*
  * Basic CPU control in CR0
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index fab0f526fb81..39c4409ef56a 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -111,6 +111,14 @@ static void sregs_dump(FILE *stream, struct kvm_sregs *sregs, uint8_t indent)
 	}
 }
 
+bool kvm_is_tdp_enabled(void)
+{
+	if (is_intel_cpu())
+		return get_kvm_intel_param_bool("ept");
+	else
+		return get_kvm_amd_param_bool("npt");
+}
+
 void virt_arch_pgd_alloc(struct kvm_vm *vm)
 {
 	TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
@@ -214,6 +222,25 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
 }
 
+void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		    uint64_t nr_bytes, int level)
+{
+	uint64_t pg_size = PG_LEVEL_SIZE(level);
+	uint64_t nr_pages = nr_bytes / pg_size;
+	int i;
+
+	TEST_ASSERT(nr_bytes % pg_size == 0,
+		    "Region size not aligned: nr_bytes: 0x%lx, page size: 0x%lx",
+		    nr_bytes, pg_size);
+
+	for (i = 0; i < nr_pages; i++) {
+		__virt_pg_map(vm, vaddr, paddr, level);
+
+		vaddr += pg_size;
+		paddr += pg_size;
+	}
+}
+
 static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm,
 					  struct kvm_vcpu *vcpu,
 					  uint64_t vaddr)
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
index cc6421716400..8c1181a5ba56 100644
--- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
@@ -112,6 +112,7 @@ void run_test(int reclaim_period_ms, bool disable_nx_huge_pages,
 {
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
+	uint64_t nr_bytes;
 	void *hva;
 	int r;
 
@@ -141,10 +142,24 @@ void run_test(int reclaim_period_ms, bool disable_nx_huge_pages,
 				    HPAGE_GPA, HPAGE_SLOT,
 				    HPAGE_SLOT_NPAGES, 0);
 
-	virt_map(vm, HPAGE_GVA, HPAGE_GPA, HPAGE_SLOT_NPAGES);
+	nr_bytes = HPAGE_SLOT_NPAGES * vm->page_size;
+
+	/*
+	 * Ensure that KVM can map HPAGE_SLOT with huge pages by mapping the
+	 * region into the guest with 2MiB pages whenever TDP is disabled (i.e.
+	 * whenever KVM is shadowing the guest page tables).
+	 *
+	 * When TDP is enabled, KVM should be able to map HPAGE_SLOT with huge
+	 * pages irrespective of the guest page size, so map with 4KiB pages
+	 * to test that that is the case.
+	 */
+	if (kvm_is_tdp_enabled())
+		virt_map_level(vm, HPAGE_GVA, HPAGE_GPA, nr_bytes, PG_LEVEL_4K);
+	else
+		virt_map_level(vm, HPAGE_GVA, HPAGE_GPA, nr_bytes, PG_LEVEL_2M);
 
 	hva = addr_gpa2hva(vm, HPAGE_GPA);
-	memset(hva, RETURN_OPCODE, HPAGE_SLOT_NPAGES * PAGE_SIZE);
+	memset(hva, RETURN_OPCODE, nr_bytes);
 
 	check_2m_page_count(vm, 0);
 	check_split_count(vm, 0);