From patchwork Wed Oct 31 13:26:33 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Marc Orr
X-Patchwork-Id: 10662685
Date: Wed, 31 Oct 2018 06:26:33 -0700
In-Reply-To: <20181031132634.50440-1-marcorr@google.com>
Message-Id: <20181031132634.50440-4-marcorr@google.com>
Mime-Version: 1.0
References: <20181031132634.50440-1-marcorr@google.com>
X-Mailer: git-send-email 2.19.1.568.g152ad8e336-goog
Subject: [kvm PATCH v5 3/4] kvm: vmx: refactor vmx_msrs struct for vmalloc
From: Marc Orr
To: kvm@vger.kernel.org, jmattson@google.com, rientjes@google.com,
    konrad.wilk@oracle.com, linux-mm@kvack.org, akpm@linux-foundation.org,
    pbonzini@redhat.com, rkrcmar@redhat.com, willy@infradead.org,
    sean.j.christopherson@intel.com, dave.hansen@linux.intel.com,
    kernellwp@gmail.com
Cc: Marc Orr

Previously, the vmx_msrs struct relied on being embedded within a struct
that is backed by the direct map (e.g., memory allocated with
kmalloc()). Specifically, this enabled the virtual addresses associated
with the struct to be translated to physical addresses. However, we'd
like to refactor the host struct, vcpu_vmx, to be allocated with
vmalloc(), so that allocation will succeed when contiguous physical
memory is scarce. Thus, this patch refactors how vmx_msrs is declared
and allocated, to ensure that it can be mapped to the physical address
space, even when vmx_msrs resides within a vmalloc()'d struct.
Signed-off-by: Marc Orr
---
 arch/x86/kvm/vmx.c | 57 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4078cf15a4b0..315cf4b5f262 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -970,8 +970,25 @@ static inline int pi_test_sn(struct pi_desc *pi_desc)
 struct vmx_msrs {
 	unsigned int		nr;
-	struct vmx_msr_entry	val[NR_AUTOLOAD_MSRS];
+	struct vmx_msr_entry	*val;
 };
+struct kmem_cache *vmx_msr_entry_cache;
+
+/*
+ * To prevent vmx_msr_entry array from crossing a page boundary, require:
+ * sizeof(*vmx_msrs.vmx_msr_entry.val) to be a power of two. This is guaranteed
+ * through compile-time asserts that:
+ * - NR_AUTOLOAD_MSRS * sizeof(struct vmx_msr_entry) is a power of two
+ * - NR_AUTOLOAD_MSRS * sizeof(struct vmx_msr_entry) <= PAGE_SIZE
+ * - The allocation of vmx_msrs.vmx_msr_entry.val is aligned to its size.
+ */
+#define CHECK_POWER_OF_TWO(val) \
+	BUILD_BUG_ON_MSG(!((val) && !((val) & ((val) - 1))), \
+			 #val " is not a power of two.")
+#define CHECK_INTRA_PAGE(val) do { \
+		CHECK_POWER_OF_TWO(val); \
+		BUILD_BUG_ON(!(val <= PAGE_SIZE)); \
+	} while (0)

 struct vcpu_vmx {
 	struct kvm_vcpu       vcpu;
@@ -11497,6 +11514,19 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 		goto free_partial_vcpu;
 	}

+	vmx->msr_autoload.guest.val =
+		kmem_cache_zalloc(vmx_msr_entry_cache, GFP_KERNEL);
+	if (!vmx->msr_autoload.guest.val) {
+		err = -ENOMEM;
+		goto free_fpu;
+	}
+	vmx->msr_autoload.host.val =
+		kmem_cache_zalloc(vmx_msr_entry_cache, GFP_KERNEL);
+	if (!vmx->msr_autoload.host.val) {
+		err = -ENOMEM;
+		goto free_msr_autoload_guest;
+	}
+
 	vmx->vpid = allocate_vpid();

 	err = kvm_vcpu_init(&vmx->vcpu, kvm, id);
@@ -11584,6 +11614,10 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	kvm_vcpu_uninit(&vmx->vcpu);
 free_vcpu:
 	free_vpid(vmx->vpid);
+	kmem_cache_free(vmx_msr_entry_cache, vmx->msr_autoload.host.val);
+free_msr_autoload_guest:
+	kmem_cache_free(vmx_msr_entry_cache, vmx->msr_autoload.guest.val);
+free_fpu:
 	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.guest_fpu);
 free_partial_vcpu:
 	kmem_cache_free(kvm_vcpu_cache, vmx);
@@ -15163,6 +15197,10 @@ module_exit(vmx_exit);
 static int __init vmx_init(void)
 {
 	int r;
+	size_t vmx_msr_entry_size =
+		sizeof(struct vmx_msr_entry) * NR_AUTOLOAD_MSRS;
+
+	CHECK_INTRA_PAGE(vmx_msr_entry_size);

 #if IS_ENABLED(CONFIG_HYPERV)
 	/*
@@ -15194,9 +15232,21 @@ static int __init vmx_init(void)
 #endif

 	r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
-		__alignof__(struct vcpu_vmx), THIS_MODULE);
+		     __alignof__(struct vcpu_vmx), THIS_MODULE);
 	if (r)
 		return r;
+	/*
+	 * A vmx_msr_entry array resides exclusively within the kernel. Thus,
+	 * use kmem_cache_create_usercopy(), with the usersize argument set to
+	 * ZERO, to blacklist copying vmx_msr_entry to/from user space.
+	 */
+	vmx_msr_entry_cache =
+		kmem_cache_create_usercopy("vmx_msr_entry", vmx_msr_entry_size,
+					   vmx_msr_entry_size, SLAB_ACCOUNT,
+					   0, 0, NULL);
+	if (!vmx_msr_entry_cache) {
+		r = -ENOMEM;
+		goto out;
+	}

 	/*
 	 * Must be called after kvm_init() so enable_ept is properly set
@@ -15220,5 +15270,8 @@ static int __init vmx_init(void)
 	vmx_check_vmcs12_offsets();

 	return 0;
+out:
+	kvm_exit();
+	return r;
 }
 module_init(vmx_init);