From patchwork Mon Mar 21 23:48:34 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12787951
Date: Mon, 21 Mar 2022 16:48:34 -0700
Message-Id: <20220321234844.1543161-2-bgardon@google.com>
In-Reply-To: <20220321234844.1543161-1-bgardon@google.com>
Subject: [PATCH v2 01/11] KVM: selftests: Add vm_alloc_page_table_in_memslot library function
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ricardo Koller, Ben Gardon

From: Ricardo Koller

Add a library function to allocate a page-table physical page in a
particular memslot. The default behavior is to create new page-table
pages in memslot 0.
Signed-off-by: Ricardo Koller
Reviewed-by: Ben Gardon
Signed-off-by: Ben Gardon
---
 tools/testing/selftests/kvm/include/kvm_util_base.h | 1 +
 tools/testing/selftests/kvm/lib/kvm_util.c          | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 92cef0ffb19e..976aaaba8769 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -311,6 +311,7 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 			      vm_paddr_t paddr_min, uint32_t memslot);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
+vm_paddr_t vm_alloc_page_table_in_memslot(struct kvm_vm *vm, uint32_t pt_memslot);
 
 /*
  * Create a VM with reasonable defaults
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1665a220abcb..11a692cf4570 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2425,9 +2425,15 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 /* Arbitrary minimum physical address used for virtual translation tables. */
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
 
+vm_paddr_t vm_alloc_page_table_in_memslot(struct kvm_vm *vm, uint32_t pt_memslot)
+{
+	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+				 pt_memslot);
+}
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
 {
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+	return vm_alloc_page_table_in_memslot(vm, 0);
 }
 
 /*

From patchwork Mon Mar 21 23:48:35 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12787952
Date: Mon, 21 Mar 2022 16:48:35 -0700
Message-Id: <20220321234844.1543161-3-bgardon@google.com>
In-Reply-To: <20220321234844.1543161-1-bgardon@google.com>
Subject: [PATCH v2 02/11] KVM: selftests: Dump VM stats in binary stats test
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

Add kvm_util library functions to read KVM stats through the binary
stats interface and then dump them to stdout when running the binary
stats test.
Subsequent commits will extend the kvm_util code and use it to make
assertions in a test for NX hugepages.

CC: Jing Zhang
Signed-off-by: Ben Gardon
---
 .../selftests/kvm/include/kvm_util_base.h  |   1 +
 .../selftests/kvm/kvm_binary_stats_test.c  |   3 +
 tools/testing/selftests/kvm/lib/kvm_util.c | 143 ++++++++++++++++++
 3 files changed, 147 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 976aaaba8769..4783fd1cd4cf 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -401,6 +401,7 @@ void assert_on_unhandled_exception(struct kvm_vm *vm, uint32_t vcpuid);
 
 int vm_get_stats_fd(struct kvm_vm *vm);
 int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid);
+void dump_vm_stats(struct kvm_vm *vm);
 
 uint32_t guest_get_vcpuid(void);
 
diff --git a/tools/testing/selftests/kvm/kvm_binary_stats_test.c b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
index 17f65d514915..afc4701ce8dd 100644
--- a/tools/testing/selftests/kvm/kvm_binary_stats_test.c
+++ b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
@@ -174,6 +174,9 @@ static void vm_stats_test(struct kvm_vm *vm)
 	stats_test(stats_fd);
 	close(stats_fd);
 	TEST_ASSERT(fcntl(stats_fd, F_GETFD) == -1, "Stats fd not freed");
+
+	/* Dump VM stats */
+	dump_vm_stats(vm);
 }
 
 static void vcpu_stats_test(struct kvm_vm *vm, int vcpu_id)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 11a692cf4570..f87df68b150d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2562,3 +2562,146 @@ int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid)
 
 	return ioctl(vcpu->fd, KVM_GET_STATS_FD, NULL);
 }
+
+/* Caller is responsible for freeing the returned kvm_stats_header. */
+static struct kvm_stats_header *read_vm_stats_header(int stats_fd)
+{
+	struct kvm_stats_header *header;
+	ssize_t ret;
+
+	/* Read kvm stats header */
+	header = malloc(sizeof(*header));
+	TEST_ASSERT(header, "Allocate memory for stats header");
+
+	ret = read(stats_fd, header, sizeof(*header));
+	TEST_ASSERT(ret == sizeof(*header), "Read stats header");
+
+	return header;
+}
+
+static void dump_header(int stats_fd, struct kvm_stats_header *header)
+{
+	ssize_t ret;
+	char *id;
+
+	printf("flags: %u\n", header->flags);
+	printf("name size: %u\n", header->name_size);
+	printf("num_desc: %u\n", header->num_desc);
+	printf("id_offset: %u\n", header->id_offset);
+	printf("desc_offset: %u\n", header->desc_offset);
+	printf("data_offset: %u\n", header->data_offset);
+
+	/* Read kvm stats id string */
+	id = malloc(header->name_size);
+	TEST_ASSERT(id, "Allocate memory for id string");
+	ret = pread(stats_fd, id, header->name_size, header->id_offset);
+	TEST_ASSERT(ret == header->name_size, "Read id string");
+
+	printf("id: %s\n", id);
+
+	free(id);
+}
+
+static ssize_t stats_desc_size(struct kvm_stats_header *header)
+{
+	return sizeof(struct kvm_stats_desc) + header->name_size;
+}
+
+/* Caller is responsible for freeing the returned kvm_stats_desc. */
+static struct kvm_stats_desc *read_vm_stats_desc(int stats_fd,
+						 struct kvm_stats_header *header)
+{
+	struct kvm_stats_desc *stats_desc;
+	size_t size_desc;
+	ssize_t ret;
+
+	size_desc = header->num_desc * stats_desc_size(header);
+
+	/* Allocate memory for stats descriptors */
+	stats_desc = malloc(size_desc);
+	TEST_ASSERT(stats_desc, "Allocate memory for stats descriptors");
+
+	/* Read kvm stats descriptors */
+	ret = pread(stats_fd, stats_desc, size_desc, header->desc_offset);
+	TEST_ASSERT(ret == size_desc, "Read KVM stats descriptors");
+
+	return stats_desc;
+}
+
+/* Caller is responsible for freeing the memory *data. */
+static int read_stat_data(int stats_fd, struct kvm_stats_header *header,
+			  struct kvm_stats_desc *desc, uint64_t **data)
+{
+	u64 *stats_data;
+	ssize_t ret;
+
+	stats_data = malloc(desc->size * sizeof(*stats_data));
+
+	ret = pread(stats_fd, stats_data, desc->size * sizeof(*stats_data),
+		    header->data_offset + desc->offset);
+
+	/* ret is in bytes. */
+	ret = ret / sizeof(*stats_data);
+
+	TEST_ASSERT(ret == desc->size,
+		    "Read data of KVM stats: %s", desc->name);
+
+	*data = stats_data;
+
+	return ret;
+}
+
+static void dump_stat(int stats_fd, struct kvm_stats_header *header,
+		      struct kvm_stats_desc *desc)
+{
+	u64 *stats_data;
+	ssize_t ret;
+	int i;
+
+	printf("\tflags: %u\n", desc->flags);
+	printf("\texponent: %u\n", desc->exponent);
+	printf("\tsize: %u\n", desc->size);
+	printf("\toffset: %u\n", desc->offset);
+	printf("\tbucket_size: %u\n", desc->bucket_size);
+	printf("\tname: %s\n", (char *)&desc->name);
+
+	ret = read_stat_data(stats_fd, header, desc, &stats_data);
+
+	printf("\tdata: %lu", *stats_data);
+	for (i = 1; i < ret; i++)
+		printf(", %lu", *(stats_data + i));
+	printf("\n\n");
+
+	free(stats_data);
+}
+
+void dump_vm_stats(struct kvm_vm *vm)
+{
+	struct kvm_stats_desc *stats_desc;
+	struct kvm_stats_header *header;
+	struct kvm_stats_desc *desc;
+	size_t size_desc;
+	int stats_fd;
+	int i;
+
+	stats_fd = vm_get_stats_fd(vm);
+
+	header = read_vm_stats_header(stats_fd);
+	dump_header(stats_fd, header);
+
+	stats_desc = read_vm_stats_desc(stats_fd, header);
+
+	size_desc = stats_desc_size(header);
+
+	/* Read kvm stats data one by one */
+	for (i = 0; i < header->num_desc; ++i) {
+		desc = (void *)stats_desc + (i * size_desc);
+		dump_stat(stats_fd, header, desc);
+	}
+
+	free(stats_desc);
+	free(header);
+
+	close(stats_fd);
+}

From patchwork Mon Mar 21 23:48:36 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12787953
Date: Mon, 21 Mar 2022 16:48:36 -0700
Message-Id: <20220321234844.1543161-4-bgardon@google.com>
In-Reply-To: <20220321234844.1543161-1-bgardon@google.com>
Subject: [PATCH v2 03/11] KVM: selftests: Test reading a single stat
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

Retrieve the value of a single stat by name in the binary stats test to
ensure the kvm_util library functions work.
CC: Jing Zhang
Signed-off-by: Ben Gardon
---
 .../selftests/kvm/include/kvm_util_base.h  |  1 +
 .../selftests/kvm/kvm_binary_stats_test.c  |  3 ++
 tools/testing/selftests/kvm/lib/kvm_util.c | 53 +++++++++++++++++
 3 files changed, 57 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 4783fd1cd4cf..78c4407f36b4 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -402,6 +402,7 @@ void assert_on_unhandled_exception(struct kvm_vm *vm, uint32_t vcpuid);
 int vm_get_stats_fd(struct kvm_vm *vm);
 int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid);
 void dump_vm_stats(struct kvm_vm *vm);
+uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name);
 
 uint32_t guest_get_vcpuid(void);
 
diff --git a/tools/testing/selftests/kvm/kvm_binary_stats_test.c b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
index afc4701ce8dd..97bde355f105 100644
--- a/tools/testing/selftests/kvm/kvm_binary_stats_test.c
+++ b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
@@ -177,6 +177,9 @@ static void vm_stats_test(struct kvm_vm *vm)
 
 	/* Dump VM stats */
 	dump_vm_stats(vm);
+
+	/* Read a single stat. */
+	printf("remote_tlb_flush: %lu\n", vm_get_single_stat(vm, "remote_tlb_flush"));
 }
 
 static void vcpu_stats_test(struct kvm_vm *vm, int vcpu_id)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f87df68b150d..9c4574381daa 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2705,3 +2705,56 @@ void dump_vm_stats(struct kvm_vm *vm)
 
 	close(stats_fd);
 }
+
+static int vm_get_stat_data(struct kvm_vm *vm, const char *stat_name,
+			    uint64_t **data)
+{
+	struct kvm_stats_desc *stats_desc;
+	struct kvm_stats_header *header;
+	struct kvm_stats_desc *desc;
+	size_t size_desc;
+	int stats_fd;
+	int ret = -EINVAL;
+	int i;
+
+	*data = NULL;
+
+	stats_fd = vm_get_stats_fd(vm);
+
+	header = read_vm_stats_header(stats_fd);
+
+	stats_desc = read_vm_stats_desc(stats_fd, header);
+
+	size_desc = stats_desc_size(header);
+
+	/* Read kvm stats data one by one */
+	for (i = 0; i < header->num_desc; ++i) {
+		desc = (void *)stats_desc + (i * size_desc);
+
+		if (strcmp(desc->name, stat_name))
+			continue;
+
+		ret = read_stat_data(stats_fd, header, desc, data);
+	}
+
+	free(stats_desc);
+	free(header);
+
+	close(stats_fd);
+
+	return ret;
+}
+
+uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name)
+{
+	uint64_t *data;
+	uint64_t value;
+	int ret;
+
+	ret = vm_get_stat_data(vm, stat_name, &data);
+	TEST_ASSERT(ret == 1, "Stat %s expected to have 1 element, but has %d",
+		    stat_name, ret);
+	value = *data;
+	free(data);
+	return value;
+}

From patchwork Mon Mar 21 23:48:37 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12787954
Date: Mon, 21 Mar 2022 16:48:37 -0700
Message-Id: <20220321234844.1543161-5-bgardon@google.com>
In-Reply-To: <20220321234844.1543161-1-bgardon@google.com>
Subject: [PATCH v2 04/11] KVM: selftests: Add memslot parameter to elf_load
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

Currently elf_load loads code into memslot 0. Add a parameter to allow
loading code into any memslot. This will be useful for backing code
pages with huge pages in future commits.

No functional change intended.
Signed-off-by: Ben Gardon
---
 .../testing/selftests/kvm/include/kvm_util_base.h |  5 +++++
 tools/testing/selftests/kvm/lib/elf.c             | 13 +++++++++++--
 tools/testing/selftests/kvm/lib/kvm_util.c        | 14 ++++++++++----
 3 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 78c4407f36b4..72163ba2f878 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -122,7 +122,10 @@ uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm);
 int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva,
 		       size_t len);
 
+void kvm_vm_elf_load_memslot(struct kvm_vm *vm, const char *filename,
+			     uint32_t memslot);
 void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename);
+
 int kvm_memfd_alloc(size_t size, bool hugepages);
 
 void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
@@ -169,6 +172,8 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid);
+vm_vaddr_t vm_vaddr_alloc_memslot(struct kvm_vm *vm, size_t sz,
+				  vm_vaddr_t vaddr_min, uint32_t memslot);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c
index 13e8e3dcf984..899418e65f60 100644
--- a/tools/testing/selftests/kvm/lib/elf.c
+++ b/tools/testing/selftests/kvm/lib/elf.c
@@ -97,6 +97,7 @@ static void elfhdr_get(const char *filename, Elf64_Ehdr *hdrp)
  *
  * Input Args:
  *   filename - Path to ELF file
+ *   memslot - the memslot into which the elf should be loaded
  *
  * Output Args: None
  *
@@ -111,7 +112,8 @@ static void elfhdr_get(const char *filename, Elf64_Ehdr *hdrp)
 * by the image and it needs to have sufficient available physical pages, to
 * back the virtual pages used to load the image.
 */
-void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
+void kvm_vm_elf_load_memslot(struct kvm_vm *vm, const char *filename,
+			     uint32_t memslot)
 {
 	off_t offset, offset_rv;
 	Elf64_Ehdr hdr;
@@ -162,7 +164,9 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
 		seg_vend |= vm->page_size - 1;
 		size_t seg_size = seg_vend - seg_vstart + 1;
 
-		vm_vaddr_t vaddr = vm_vaddr_alloc(vm, seg_size, seg_vstart);
+		vm_vaddr_t vaddr = vm_vaddr_alloc_memslot(vm, seg_size,
+							  seg_vstart,
+							  memslot);
 		TEST_ASSERT(vaddr == seg_vstart, "Unable to allocate "
 			"virtual memory for segment at requested min addr,\n"
 			"  segment idx: %u\n"
@@ -191,3 +195,8 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
 		}
 	}
 }
+
+void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
+{
+	kvm_vm_elf_load_memslot(vm, filename, 0);
+}
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9c4574381daa..09742a787546 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1336,8 +1336,7 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 *   vm - Virtual Machine
 *   sz - Size in bytes
 *   vaddr_min - Minimum starting virtual address
- *   data_memslot - Memory region slot for data pages
- *   pgd_memslot - Memory region slot for new virtual translation tables
+ *   memslot - Memory region slot for data pages
 *
 * Output Args: None
 *
@@ -1350,13 +1349,15 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 * a unique set of pages, with the minimum real allocation being at least
 * a page.
 */
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+vm_vaddr_t vm_vaddr_alloc_memslot(struct kvm_vm *vm, size_t sz,
+				  vm_vaddr_t vaddr_min, uint32_t memslot)
 {
 	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
 
 	virt_pgd_alloc(vm);
 	vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages,
-					      KVM_UTIL_MIN_PFN * vm->page_size, 0);
+					      KVM_UTIL_MIN_PFN * vm->page_size,
+					      memslot);
 
 	/*
 	 * Find an unused range of virtual page addresses of at least
@@ -1377,6 +1378,11 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
 	return vaddr_start;
 }
 
+vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+{
+	return vm_vaddr_alloc_memslot(vm, sz, vaddr_min, 0);
+}
+
 /*
 * VM Virtual Address Allocate Pages
 *

From patchwork Mon Mar 21 23:48:38 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12787961
Date: Mon, 21 Mar 2022 16:48:38 -0700
Message-Id: <20220321234844.1543161-6-bgardon@google.com>
In-Reply-To: <20220321234844.1543161-1-bgardon@google.com>
Subject: [PATCH v2 05/11] KVM: selftests: Improve error message in vm_phy_pages_alloc
From: Ben Gardon
linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , David Matlack , Jim Mattson , David Dunn , Jing Zhang , Junaid Shahid , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Make an error message in vm_phy_pages_alloc more specific, and log the number of pages requested in the allocation. Signed-off-by: Ben Gardon --- tools/testing/selftests/kvm/lib/kvm_util.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 09742a787546..9d72d1bb34fa 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -2408,9 +2408,10 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, } while (pg && pg != base + num); if (pg == 0) { - fprintf(stderr, "No guest physical page available, " + fprintf(stderr, + "Unable to find %ld contiguous guest physical pages. " "paddr_min: 0x%lx page_size: 0x%x memslot: %u\n", - paddr_min, vm->page_size, memslot); + num, paddr_min, vm->page_size, memslot); fputs("---- vm dump ----\n", stderr); vm_dump(stderr, vm, 2); abort(); From patchwork Mon Mar 21 23:48:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12787955 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AFA3DC433EF for ; Mon, 21 Mar 2022 23:49:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233660AbiCUXvR (ORCPT ); Mon, 21 Mar 2022 19:51:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59194 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233621AbiCUXvB (ORCPT ); Mon, 21 Mar 
From patchwork Mon Mar 21 23:48:39 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12787955
Date: Mon, 21 Mar 2022 16:48:39 -0700
In-Reply-To: <20220321234844.1543161-1-bgardon@google.com>
Message-Id: <20220321234844.1543161-7-bgardon@google.com>
Subject: [PATCH v2 06/11] KVM: selftests: Add NX huge pages test
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson,
    David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

There's currently no test coverage of NX hugepages in KVM selftests, so
add a basic test to ensure that the feature works as intended.

Reviewed-by: David Dunn
Signed-off-by: Ben Gardon
---
 tools/testing/selftests/kvm/Makefile          |   3 +-
 .../kvm/lib/x86_64/nx_huge_pages_guest.S      |  45 ++++++
 .../selftests/kvm/x86_64/nx_huge_pages_test.c | 133 ++++++++++++++++++
 .../kvm/x86_64/nx_huge_pages_test.sh          |  25 ++++
 4 files changed, 205 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
 create mode 100644 tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
 create mode 100755 tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 04099f453b59..6ee30c0df323 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -38,7 +38,7 @@ ifeq ($(ARCH),riscv)
 endif
 
 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
-LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
+LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S lib/x86_64/nx_huge_pages_guest.S
 LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c
 LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
 LIBKVM_riscv = lib/riscv/processor.c lib/riscv/ucall.c
@@ -56,6 +56,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/kvm_clock_test
 TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
 TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
 TEST_GEN_PROGS_x86_64 += x86_64/mmu_role_test
+TEST_GEN_PROGS_x86_64 += x86_64/nx_huge_pages_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
 TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
diff --git a/tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S b/tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
new file mode 100644
index 000000000000..09c66b9562a3
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * tools/testing/selftests/kvm/nx_huge_page_guest.S
+ *
+ * Copyright (C) 2022, Google LLC.
+ */
+
+.include "kvm_util.h"
+
+#define HPAGE_SIZE (2*1024*1024)
+#define PORT_SUCCESS 0x70
+
+.global guest_code0
+.global guest_code1
+
+.align HPAGE_SIZE
+exit_vm:
+	mov $0x1,%edi
+	mov $0x2,%esi
+	mov a_string,%edx
+	mov $0x1,%ecx
+	xor %eax,%eax
+	jmp ucall
+
+
+guest_code0:
+	mov data1, %eax
+	mov data2, %eax
+	jmp exit_vm
+
+.align HPAGE_SIZE
+guest_code1:
+	mov data1, %eax
+	mov data2, %eax
+	jmp exit_vm
+data1:
+.quad 0
+
+.align HPAGE_SIZE
+data2:
+.quad 0
+a_string:
+.string "why does the ucall function take a string argument?"
+
+
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
new file mode 100644
index 000000000000..2bcbe4efdc6a
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
@@ -0,0 +1,133 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * tools/testing/selftests/kvm/nx_huge_page_test.c
+ *
+ * Usage: to be run via nx_huge_page_test.sh, which does the necessary
+ * environment setup and teardown
+ *
+ * Copyright (C) 2022, Google LLC.
+ */
+
+#define _GNU_SOURCE
+
+#include
+#include
+#include
+
+#include
+#include "kvm_util.h"
+
+#define HPAGE_SLOT 10
+#define HPAGE_PADDR_START (10*1024*1024)
+#define HPAGE_SLOT_NPAGES (100*1024*1024/4096)
+
+/* Defined in nx_huge_page_guest.S */
+void guest_code0(void);
+void guest_code1(void);
+
+static void run_guest_code(struct kvm_vm *vm, void (*guest_code)(void))
+{
+	struct kvm_regs regs;
+
+	vcpu_regs_get(vm, 0, &regs);
+	regs.rip = (uint64_t)guest_code;
+	vcpu_regs_set(vm, 0, &regs);
+	vcpu_run(vm, 0);
+}
+
+static void check_2m_page_count(struct kvm_vm *vm, int expected_pages_2m)
+{
+	int actual_pages_2m;
+
+	actual_pages_2m = vm_get_single_stat(vm, "pages_2m");
+
+	TEST_ASSERT(actual_pages_2m == expected_pages_2m,
+		    "Unexpected 2m page count. Expected %d, got %d",
+		    expected_pages_2m, actual_pages_2m);
+}
+
+static void check_split_count(struct kvm_vm *vm, int expected_splits)
+{
+	int actual_splits;
+
+	actual_splits = vm_get_single_stat(vm, "nx_lpage_splits");
+
+	TEST_ASSERT(actual_splits == expected_splits,
+		    "Unexpected nx lpage split count. Expected %d, got %d",
+		    expected_splits, actual_splits);
+}
+
+int main(int argc, char **argv)
+{
+	struct kvm_vm *vm;
+	struct timespec ts;
+
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_HUGETLB,
+				    HPAGE_PADDR_START, HPAGE_SLOT,
+				    HPAGE_SLOT_NPAGES, 0);
+
+	kvm_vm_elf_load_memslot(vm, program_invocation_name, HPAGE_SLOT);
+
+	vm_vcpu_add_default(vm, 0, guest_code0);
+
+	check_2m_page_count(vm, 0);
+	check_split_count(vm, 0);
+
+	/*
+	 * Running guest_code0 will access data1 and data2.
+	 * This should result in part of the huge page containing guest_code0,
+	 * and part of the hugepage containing the ucall function being mapped
+	 * at 4K. The huge pages containing data1 and data2 will be mapped
+	 * at 2M.
+	 */
+	run_guest_code(vm, guest_code0);
+	check_2m_page_count(vm, 2);
+	check_split_count(vm, 2);
+
+	/*
+	 * guest_code1 is in the same huge page as data1, so it will cause
+	 * that huge page to be remapped at 4k.
+	 */
+	run_guest_code(vm, guest_code1);
+	check_2m_page_count(vm, 1);
+	check_split_count(vm, 3);
+
+	/* Run guest_code0 again to check that it has no effect. */
+	run_guest_code(vm, guest_code0);
+	check_2m_page_count(vm, 1);
+	check_split_count(vm, 3);
+
+	/*
+	 * Give the recovery thread time to run. The wrapper script sets
+	 * recovery_period_ms to 100, so wait 1.5x that.
+	 */
+	ts.tv_sec = 0;
+	ts.tv_nsec = 150000000;
+	nanosleep(&ts, NULL);
+
+	/*
+	 * Now that the reclaimer has run, all the split pages should be gone.
+	 */
+	check_2m_page_count(vm, 1);
+	check_split_count(vm, 0);
+
+	/*
+	 * The split 2M pages should have been reclaimed, so run guest_code0
+	 * again to check that pages are mapped at 2M again.
+	 */
+	run_guest_code(vm, guest_code0);
+	check_2m_page_count(vm, 2);
+	check_split_count(vm, 2);
+
+	/* Pages are once again split from running guest_code1. */
+	run_guest_code(vm, guest_code1);
+	check_2m_page_count(vm, 1);
+	check_split_count(vm, 3);
+
+	kvm_vm_free(vm);
+
+	return 0;
+}
+
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
new file mode 100755
index 000000000000..19fc95723fcb
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0-only
+
+# tools/testing/selftests/kvm/nx_huge_page_test.sh
+# Copyright (C) 2022, Google LLC.
+
+NX_HUGE_PAGES=$(cat /sys/module/kvm/parameters/nx_huge_pages)
+NX_HUGE_PAGES_RECOVERY_RATIO=$(cat /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio)
+NX_HUGE_PAGES_RECOVERY_PERIOD=$(cat /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms)
+HUGE_PAGES=$(cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages)
+
+echo 1 > /sys/module/kvm/parameters/nx_huge_pages
+echo 1 > /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio
+echo 100 > /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms
+echo 200 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+./nx_huge_pages_test
+RET=$?
+
+echo $NX_HUGE_PAGES > /sys/module/kvm/parameters/nx_huge_pages
+echo $NX_HUGE_PAGES_RECOVERY_RATIO > /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio
+echo $NX_HUGE_PAGES_RECOVERY_PERIOD > /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms
+echo $HUGE_PAGES > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+exit $RET
From patchwork Mon Mar 21 23:48:40 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12787960
Date: Mon, 21 Mar 2022 16:48:40 -0700
In-Reply-To: <20220321234844.1543161-1-bgardon@google.com>
Message-Id: <20220321234844.1543161-8-bgardon@google.com>
Subject: [PATCH v2 07/11] KVM: x86/MMU: Factor out updating NX hugepages state for a VM
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson,
    David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

Factor out the code to update the NX hugepages state for an individual
VM. This will be expanded in future commits to allow per-VM control of
NX hugepages.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3b8da8b0745e..1b59b56642f1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6195,6 +6195,15 @@ static void __set_nx_huge_pages(bool val)
 	nx_huge_pages = itlb_multihit_kvm_mitigation = val;
 }
 
+static int kvm_update_nx_huge_pages(struct kvm *kvm)
+{
+	mutex_lock(&kvm->slots_lock);
+	kvm_mmu_zap_all_fast(kvm);
+	mutex_unlock(&kvm->slots_lock);
+
+	wake_up_process(kvm->arch.nx_lpage_recovery_thread);
+}
+
 static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
 {
 	bool old_val = nx_huge_pages;
@@ -6217,13 +6226,8 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
 
 		mutex_lock(&kvm_lock);
 
-		list_for_each_entry(kvm, &vm_list, vm_list) {
-			mutex_lock(&kvm->slots_lock);
-			kvm_mmu_zap_all_fast(kvm);
-			mutex_unlock(&kvm->slots_lock);
-
-			wake_up_process(kvm->arch.nx_lpage_recovery_thread);
-		}
+		list_for_each_entry(kvm, &vm_list, vm_list)
+			kvm_set_nx_huge_pages(kvm);
 
 		mutex_unlock(&kvm_lock);
 	}
From patchwork Mon Mar 21 23:48:41 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12787959
Date: Mon, 21 Mar 2022 16:48:41 -0700
In-Reply-To: <20220321234844.1543161-1-bgardon@google.com>
Message-Id: <20220321234844.1543161-9-bgardon@google.com>
Subject: [PATCH v2 08/11] KVM: x86/MMU: Track NX hugepages on a per-VM basis
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson,
    David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

Track whether NX hugepages are enabled on a per-VM basis instead of as a
host-wide setting. With this commit, the per-VM state will always be the
same as the host-wide setting, but in future commits, it will be allowed
to differ.

No functional change intended.

Signed-off-by: Ben Gardon
Reviewed-by: David Matlack
---
 arch/x86/include/asm/kvm_host.h | 2 ++
 arch/x86/kvm/mmu.h              | 8 ++++----
 arch/x86/kvm/mmu/mmu.c          | 7 +++++--
 arch/x86/kvm/mmu/spte.c         | 7 ++++---
 arch/x86/kvm/mmu/spte.h         | 3 ++-
 arch/x86/kvm/mmu/tdp_mmu.c      | 3 ++-
 6 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f72e80178ffc..0a0c54639dd8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1240,6 +1240,8 @@ struct kvm_arch {
 	hpa_t	hv_root_tdp;
 	spinlock_t hv_root_tdp_lock;
 #endif
+
+	bool nx_huge_pages;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index bf8dbc4bb12a..dd28fe8d13ae 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -173,9 +173,9 @@ struct kvm_page_fault {
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 
 extern int nx_huge_pages;
-static inline bool is_nx_huge_page_enabled(void)
+static inline bool is_nx_huge_page_enabled(struct kvm *kvm)
 {
-	return READ_ONCE(nx_huge_pages);
+	return READ_ONCE(kvm->arch.nx_huge_pages);
 }
 
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
@@ -191,8 +191,8 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.user = err & PFERR_USER_MASK,
 		.prefetch = prefetch,
 		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
-		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
-
+		.nx_huge_page_workaround_enabled =
+			is_nx_huge_page_enabled(vcpu->kvm),
 		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1b59b56642f1..dc9672f70468 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6195,8 +6195,10 @@ static void __set_nx_huge_pages(bool val)
 	nx_huge_pages = itlb_multihit_kvm_mitigation = val;
 }
 
-static int kvm_update_nx_huge_pages(struct kvm *kvm)
+static void kvm_update_nx_huge_pages(struct kvm *kvm)
 {
+	kvm->arch.nx_huge_pages = nx_huge_pages;
+
 	mutex_lock(&kvm->slots_lock);
 	kvm_mmu_zap_all_fast(kvm);
 	mutex_unlock(&kvm->slots_lock);
@@ -6227,7 +6229,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
 		mutex_lock(&kvm_lock);
 
 		list_for_each_entry(kvm, &vm_list, vm_list)
-			kvm_set_nx_huge_pages(kvm);
+			kvm_update_nx_huge_pages(kvm);
 
 		mutex_unlock(&kvm_lock);
 	}
@@ -6448,6 +6450,7 @@ int kvm_mmu_post_init_vm(struct kvm *kvm)
 {
 	int err;
 
+	kvm->arch.nx_huge_pages = READ_ONCE(nx_huge_pages);
 	err = kvm_vm_create_worker_thread(kvm, kvm_nx_lpage_recovery_worker, 0,
 					  "kvm-nx-lpage-recovery",
 					  &kvm->arch.nx_lpage_recovery_thread);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..877ad30bc7ad 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -116,7 +116,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		spte |= spte_shadow_accessed_mask(spte);
 
 	if (level > PG_LEVEL_4K && (pte_access & ACC_EXEC_MASK) &&
-	    is_nx_huge_page_enabled()) {
+	    is_nx_huge_page_enabled(vcpu->kvm)) {
 		pte_access &= ~ACC_EXEC_MASK;
 	}
 
@@ -215,7 +215,8 @@ static u64 make_spte_executable(u64 spte)
  * This is used during huge page splitting to build the SPTEs that make up the
  * new page table.
  */
-u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index)
+u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte, int huge_level,
+			      int index)
 {
 	u64 child_spte;
 	int child_level;
@@ -243,7 +244,7 @@ u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index)
 		 * When splitting to a 4K page, mark the page executable as the
 		 * NX hugepage mitigation no longer applies.
 		 */
-		if (is_nx_huge_page_enabled())
+		if (is_nx_huge_page_enabled(kvm))
 			child_spte = make_spte_executable(child_spte);
 	}
 
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 73f12615416f..e4142caff4b1 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -415,7 +415,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
 	       u64 old_spte, bool prefetch, bool can_unsync,
 	       bool host_writable, u64 *new_spte);
-u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index);
+u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte, int huge_level,
+			      int index);
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
 u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access);
 u64 mark_spte_for_access_track(u64 spte);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index af60922906ef..98a45a87f0b2 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1466,7 +1466,8 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
 	 * not been linked in yet and thus is not reachable from any other CPU.
 	 */
 	for (i = 0; i < PT64_ENT_PER_PAGE; i++)
-		sp->spt[i] = make_huge_page_split_spte(huge_spte, level, i);
+		sp->spt[i] = make_huge_page_split_spte(kvm, huge_spte,
+						       level, i);
 
 	/*
 	 * Replace the huge spte with a pointer to the populated lower level
From patchwork Mon Mar 21 23:48:42 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12787956
Date: Mon, 21 Mar 2022 16:48:42 -0700
In-Reply-To: <20220321234844.1543161-1-bgardon@google.com>
Message-Id: <20220321234844.1543161-10-bgardon@google.com>
Subject: [PATCH v2 09/11] KVM: x86/MMU: Allow NX huge pages to be disabled on a per-vm basis
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Jim Mattson,
    David Dunn, Jing Zhang, Junaid Shahid, Ben Gardon

In some cases, the NX hugepage mitigation for iTLB multihit is not
needed for all guests on a host. Allow disabling the mitigation on a
per-VM basis to avoid the performance hit of NX hugepages on trusted
workloads.

Signed-off-by: Ben Gardon
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/mmu.h              | 1 +
 arch/x86/kvm/mmu/mmu.c          | 6 ++++--
 arch/x86/kvm/x86.c              | 6 ++++++
 include/uapi/linux/kvm.h        | 1 +
 5 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0a0c54639dd8..04ddfc475ce0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1242,6 +1242,7 @@ struct kvm_arch {
 #endif
 
 	bool nx_huge_pages;
+	bool disable_nx_huge_pages;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index dd28fe8d13ae..36d8d84ca6c6 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -177,6 +177,7 @@ static inline bool is_nx_huge_page_enabled(struct kvm *kvm)
 {
 	return READ_ONCE(kvm->arch.nx_huge_pages);
 }
+void kvm_update_nx_huge_pages(struct kvm *kvm);
 
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 					u32 err, bool prefetch)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dc9672f70468..a7d387ccfd74 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6195,9 +6195,10 @@ static void __set_nx_huge_pages(bool val)
 	nx_huge_pages = itlb_multihit_kvm_mitigation = val;
 }
 
-static void kvm_update_nx_huge_pages(struct kvm *kvm)
+void kvm_update_nx_huge_pages(struct kvm *kvm)
 {
-	kvm->arch.nx_huge_pages = nx_huge_pages;
+	kvm->arch.nx_huge_pages = nx_huge_pages &&
+				  !kvm->arch.disable_nx_huge_pages;
 
 	mutex_lock(&kvm->slots_lock);
 	kvm_mmu_zap_all_fast(kvm);
@@ -6451,6 +6452,7 @@ int kvm_mmu_post_init_vm(struct kvm *kvm)
 	int err;
 
 	kvm->arch.nx_huge_pages = READ_ONCE(nx_huge_pages);
+	kvm->arch.disable_nx_huge_pages = false;
 	err = kvm_vm_create_worker_thread(kvm, kvm_nx_lpage_recovery_worker, 0,
 					  "kvm-nx-lpage-recovery",
 					  &kvm->arch.nx_lpage_recovery_thread);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 51106d32f04e..73df90a6932b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4256,6 +4256,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_SYS_ATTRIBUTES:
 	case KVM_CAP_VAPIC:
 	case KVM_CAP_ENABLE_CAP:
+	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
 		r = 1;
 		break;
 	case KVM_CAP_EXIT_HYPERCALL:
@@ -6048,6 +6049,11 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		}
 		mutex_unlock(&kvm->lock);
 		break;
+	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
+		kvm->arch.disable_nx_huge_pages = true;
+		kvm_update_nx_huge_pages(kvm);
+		r = 0;
+		break;
 	default:
 		r = -EINVAL;
 		break;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index ee5cc9e2a837..6f9fa7ecfd1e 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1144,6 +1144,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_MEM_OP_EXTENSION 211
 #define KVM_CAP_PMU_CAPABILITY 212
 #define KVM_CAP_DISABLE_QUIRKS2 213
+#define KVM_CAP_VM_DISABLE_NX_HUGE_PAGES 214
 
 #ifdef KVM_CAP_IRQ_ROUTING
i8-20020a056a00004800b004fa5a5ecc4bso384883pfk.16 for ; Mon, 21 Mar 2022 16:49:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=03AtXXduXg1zb1Lu6FvRz8gaF3DklY9yvpB3ES+4Dvo=; b=Q1xUUUmlqEpBGcIC6H+wXtC8Gr2rWlrSz7NI5xNmSE0uvZIhuVGBn3UBiUKZBNEP3y uVJDWwuVe+43rOQ4ih/Q02ttE3rAuFeZ09lxezOVwWDDfJ/49M59H936VvqCqVYdbPMz kEr69NpA4DkdxnFB5YyruXcZUqGqEGBmB45uzV/etPNq43icb12Av7RpQmZLXgbm9A9d 8uKXZ3Ng66TKGPqurVnVl/3Q/ufsUqWlXF1Hij+3jE1dOt3pcaa68W2Z/fgm6VNv7jnh RwqSAAEAJ/T9NOGtFFtqaoreplYfA45saZWQEXhVN28LoX1yyKAGp9P24yQf0lLNQt6j kjAQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=03AtXXduXg1zb1Lu6FvRz8gaF3DklY9yvpB3ES+4Dvo=; b=l0xSDpKHzafHTjb7v5SzLzQcdmckMvc8crGzdASFI0L1jFdsEBZqxDS6XqB1NTtQFc nAsOb2RAFbrppVGPVgiJLxlD2vkSQ5uN6mk75AjabAvjoSnBw3LQ/TOlQ5EFSxrwIY4G L9BDw875l0qNPz/z2Zdi4j5Kc2n+3fVQhLJ9S1Zrhj3fPe4umrAaU/9uflqNoQ3zDj/a AC/NBbyD5eUQeKRBJV4JeSTqNbk1C448UMxzGjMR9rNtolpk+JPkkdnG74W6B88guDiC GLdUz44lRwvDqtOlgvQz7KJsVXYH5ISvSc1U2PfbLAcmwP6LL4KYoa1dvlFjl7HuydbO OWXQ== X-Gm-Message-State: AOAM530OK9d+Iuhv4+7cPhnT7qgH+9M/mMibLzwj/kuCXOpjd0fAyEdp sdzdVF0MgAuHrLxxoSs+aTxVZ61vY9c0 X-Google-Smtp-Source: ABdhPJyku+3fzcf4ntA+Tu3Q+OdzLeoDGPq6egXk8Edj0z6jYxTHXMgyqm99IUFx4je5oaBLEjfaCc7A1/nJ X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:b76a:f152:cb5e:5cd2]) (user=bgardon job=sendgmr) by 2002:aa7:943a:0:b0:4f6:adc9:d741 with SMTP id y26-20020aa7943a000000b004f6adc9d741mr25936431pfo.30.1647906558205; Mon, 21 Mar 2022 16:49:18 -0700 (PDT) Date: Mon, 21 Mar 2022 16:48:43 -0700 In-Reply-To: <20220321234844.1543161-1-bgardon@google.com> Message-Id: <20220321234844.1543161-11-bgardon@google.com> Mime-Version: 1.0 References: <20220321234844.1543161-1-bgardon@google.com> X-Mailer: git-send-email 
2.35.1.894.gb6a874cedc-goog Subject: [PATCH v2 10/11] KVM: x86: Fix errant brace in KVM capability handling From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , David Matlack , Jim Mattson , David Dunn , Jing Zhang , Junaid Shahid , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The braces around the KVM_CAP_XSAVE2 block also surround the KVM_CAP_PMU_CAPABILITY block, likely the result of a merge issue. Simply move the curly brace back to where it belongs. Fixes: ba7bb663f5547 ("KVM: x86: Provide per VM capability for disabling PMU virtualization") Signed-off-by: Ben Gardon --- arch/x86/kvm/x86.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 73df90a6932b..74351cbb9b5b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4352,10 +4352,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) if (r < sizeof(struct kvm_xsave)) r = sizeof(struct kvm_xsave); break; + } case KVM_CAP_PMU_CAPABILITY: r = enable_pmu ? 
KVM_CAP_PMU_VALID_MASK : 0; break; - } case KVM_CAP_DISABLE_QUIRKS2: r = KVM_X86_VALID_QUIRKS; break; From patchwork Mon Mar 21 23:48:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12787958 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 603B6C433FE for ; Mon, 21 Mar 2022 23:49:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233687AbiCUXvW (ORCPT ); Mon, 21 Mar 2022 19:51:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59824 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233692AbiCUXvO (ORCPT ); Mon, 21 Mar 2022 19:51:14 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AC020195D8A for ; Mon, 21 Mar 2022 16:49:21 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id j1-20020a170903028100b0014b1f9e0068so6234635plr.8 for ; Mon, 21 Mar 2022 16:49:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=m5T3Hbrl4DyDMOn7A4MiOSbjCSgOB5xSxPj2asSn9Wg=; b=Zacml26OoL/GF6NlL2gXG5FXZUDRoVGueOLiN6hKDPI9ASu/bkCxXaoxGt8/F0Mh5J FWAq8pUohHlkR4XmRKhyQX+XZq5v4u5JIPCUzw9V+zwzW3CD09XBZejqxuTyU+Ii6p5B h+Y6Mgnp/2n0gVU4Y4TPVDo9iP7NTj1BSTk5eHfqbtDS08cJpEdPHNTIFJhzeaylgWGK HCBBpc9KmVs32VdzG55JHsK/Ub+SjtOb9RQrJSwQiVk1whsye2wBJCWaosKcrBzNQroP kwSzod88EmWiLZ9h8zLQyjncwJB9lHtFgRayQOIFH24ZZj+dulNmO9RI7Mi8JwBXLPiG qoOQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version 
:references:subject:from:to:cc; bh=m5T3Hbrl4DyDMOn7A4MiOSbjCSgOB5xSxPj2asSn9Wg=; b=xnmpUA7Zoq44nbrOgqOqycBJOTwyfaPOt6d8gzFbP1LHgJtunIoYnMDUDZ7ywt98kI NMSxDWbYYleEbHQ0fE9O92rcfI0IXSXw4gqKYSgy/kYbngg7avvMoPb/EbZmTf/ajR8Z HEfACHSlAqCL5LeLKSTDdXF5gGuWAlvj4YyfEn2CNEMsv4NYJ8LKmd844nYKj05IhDZ+ okxTvAUkyq3yTRN8ISF7bXKQqQfRPscD70KArAueLRjbPOfpr6zEYjTL3zyZ7uonuZDq w4Dz/LPVNRpgyJr664rp+Ee2EI6ZJBmJiE+OONse1LKivwwSmHxX9fJ4HVgpsFvvxL90 UtvQ== X-Gm-Message-State: AOAM530mIclMN0y+KryagLprVLitxo/LxuIT6S4wX7yQqg4gAO4aUBJx V4vCYVTQR+5PM4foi6ROrS7vVuoJOnaZ X-Google-Smtp-Source: ABdhPJxiNgqjZsgRsl/JaT3z6xvFFNLrDtQigGPFZbUOauwKxglp+Z1PfpkQGMLQS016z4FSanVYNBEijQ/w X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:b76a:f152:cb5e:5cd2]) (user=bgardon job=sendgmr) by 2002:a17:90b:e81:b0:1c6:5a9c:5afa with SMTP id fv1-20020a17090b0e8100b001c65a9c5afamr204556pjb.1.1647906560754; Mon, 21 Mar 2022 16:49:20 -0700 (PDT) Date: Mon, 21 Mar 2022 16:48:44 -0700 In-Reply-To: <20220321234844.1543161-1-bgardon@google.com> Message-Id: <20220321234844.1543161-12-bgardon@google.com> Mime-Version: 1.0 References: <20220321234844.1543161-1-bgardon@google.com> X-Mailer: git-send-email 2.35.1.894.gb6a874cedc-goog Subject: [PATCH v2 11/11] KVM: x86/MMU: Require reboot permission to disable NX hugepages From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , David Matlack , Jim Mattson , David Dunn , Jing Zhang , Junaid Shahid , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Ensure that the userspace actor attempting to disable NX hugepages has permission to reboot the system. Since disabling NX hugepages would allow a guest to crash the system, it is similar to reboot permissions. This approach is the simplest permission gating, but passing a file descriptor opened for write for the module parameter would also work well and be more precise. The latter approach was suggested by Sean Christopherson. 
Suggested-by: Jim Mattson
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/x86.c                            | 18 ++++++-
 .../selftests/kvm/include/kvm_util_base.h     |  2 +
 tools/testing/selftests/kvm/lib/kvm_util.c    |  7 +++
 .../selftests/kvm/x86_64/nx_huge_pages_test.c | 49 ++++++++++++++-----
 .../kvm/x86_64/nx_huge_pages_test.sh          |  2 +-
 5 files changed, 65 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 74351cbb9b5b..995f30667619 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4256,7 +4256,6 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_SYS_ATTRIBUTES:
 	case KVM_CAP_VAPIC:
 	case KVM_CAP_ENABLE_CAP:
-	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
 		r = 1;
 		break;
 	case KVM_CAP_EXIT_HYPERCALL:
@@ -4359,6 +4358,14 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_DISABLE_QUIRKS2:
 		r = KVM_X86_VALID_QUIRKS;
 		break;
+	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
+		/*
+		 * Since the risk of disabling NX hugepages is a guest crashing
+		 * the system, ensure the userspace process has permission to
+		 * reboot the system.
+		 */
+		r = capable(CAP_SYS_BOOT);
+		break;
 	default:
 		break;
 	}
@@ -6050,6 +6057,15 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		mutex_unlock(&kvm->lock);
 		break;
 	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
+		/*
+		 * Since the risk of disabling NX hugepages is a guest crashing
+		 * the system, ensure the userspace process has permission to
+		 * reboot the system.
+		 */
+		if (!capable(CAP_SYS_BOOT)) {
+			r = -EPERM;
+			break;
+		}
 		kvm->arch.disable_nx_huge_pages = true;
 		kvm_update_nx_huge_pages(kvm);
 		r = 0;
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 72163ba2f878..4db8251c3ce5 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -411,4 +411,6 @@ uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name);

 uint32_t guest_get_vcpuid(void);

+void vm_disable_nx_huge_pages(struct kvm_vm *vm);
+
 #endif /* SELFTEST_KVM_UTIL_BASE_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9d72d1bb34fa..46a7fa08d3e0 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2765,3 +2765,10 @@ uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name)
 	return value;
 }
+
+void vm_disable_nx_huge_pages(struct kvm_vm *vm)
+{
+	struct kvm_enable_cap cap = { 0 };
+
+	cap.cap = KVM_CAP_VM_DISABLE_NX_HUGE_PAGES;
+	vm_enable_cap(vm, &cap);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
index 2bcbe4efdc6a..5ce98f759bc8 100644
--- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
@@ -57,13 +57,40 @@ static void check_split_count(struct kvm_vm *vm, int expected_splits)
 		    expected_splits, actual_splits);
 }

+static void help(void)
+{
+	puts("");
+	printf("usage: nx_huge_pages_test.sh [-x]\n");
+	puts("");
+	printf(" -x: Allow executable huge pages on the VM.\n");
+	puts("");
+	exit(0);
+}
+
 int main(int argc, char **argv)
 {
 	struct kvm_vm *vm;
 	struct timespec ts;
+	bool disable_nx = false;
+	int opt;
+
+	while ((opt = getopt(argc, argv, "x")) != -1) {
+		switch (opt) {
+		case 'x':
+			disable_nx = true;
+			break;
+		case 'h':
+		default:
+			help();
+			break;
+		}
+	}

 	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);

+	if (disable_nx)
+		vm_disable_nx_huge_pages(vm);
+
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_HUGETLB,
 				    HPAGE_PADDR_START, HPAGE_SLOT,
 				    HPAGE_SLOT_NPAGES, 0);
@@ -83,21 +110,21 @@ int main(int argc, char **argv)
 	 * at 2M.
 	 */
 	run_guest_code(vm, guest_code0);
-	check_2m_page_count(vm, 2);
-	check_split_count(vm, 2);
+	check_2m_page_count(vm, disable_nx ? 4 : 2);
+	check_split_count(vm, disable_nx ? 0 : 2);

 	/*
 	 * guest_code1 is in the same huge page as data1, so it will cause
 	 * that huge page to be remapped at 4k.
 	 */
 	run_guest_code(vm, guest_code1);
-	check_2m_page_count(vm, 1);
-	check_split_count(vm, 3);
+	check_2m_page_count(vm, disable_nx ? 4 : 1);
+	check_split_count(vm, disable_nx ? 0 : 3);

 	/* Run guest_code0 again to check that is has no effect. */
 	run_guest_code(vm, guest_code0);
-	check_2m_page_count(vm, 1);
-	check_split_count(vm, 3);
+	check_2m_page_count(vm, disable_nx ? 4 : 1);
+	check_split_count(vm, disable_nx ? 0 : 3);

 	/*
 	 * Give recovery thread time to run. The wrapper script sets
@@ -110,7 +137,7 @@ int main(int argc, char **argv)
 	/*
 	 * Now that the reclaimer has run, all the split pages should be gone.
 	 */
-	check_2m_page_count(vm, 1);
+	check_2m_page_count(vm, disable_nx ? 4 : 1);
 	check_split_count(vm, 0);

 	/*
 	 * again to check that pages are mapped at 2M again.
 	 */
 	run_guest_code(vm, guest_code0);
-	check_2m_page_count(vm, 2);
-	check_split_count(vm, 2);
+	check_2m_page_count(vm, disable_nx ? 4 : 2);
+	check_split_count(vm, disable_nx ? 0 : 2);

 	/* Pages are once again split from running guest_code1. */
 	run_guest_code(vm, guest_code1);
-	check_2m_page_count(vm, 1);
-	check_split_count(vm, 3);
+	check_2m_page_count(vm, disable_nx ? 4 : 1);
+	check_split_count(vm, disable_nx ? 0 : 3);

 	kvm_vm_free(vm);
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
index 19fc95723fcb..29f999f48848 100755
--- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
@@ -14,7 +14,7 @@ echo 1 > /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio
 echo 100 > /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms
 echo 200 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

-./nx_huge_pages_test
+./nx_huge_pages_test "${@}"
 RET=$?

 echo $NX_HUGE_PAGES > /sys/module/kvm/parameters/nx_huge_pages
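The one-line change to the wrapper script forwards its arguments to the test binary as "${@}". The quoting matters: quoted "$@" preserves each argument as a single word, while an unquoted $* re-splits on whitespace. A standalone sketch (the helper names are made up for illustration):

```shell
#!/bin/sh
# Why the wrapper passes "${@}" rather than an unquoted $*.

count_args() {
	echo "$#"
}

forward_quoted() {
	# "$@" expands to one word per original argument.
	count_args "${@}"
}

forward_unquoted() {
	# Unquoted $* is subject to field splitting.
	count_args $*
}

forward_quoted -x "two words"      # prints 2: the spaced argument stays intact
forward_unquoted -x "two words"    # prints 3: the space splits it into two words
```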