From patchwork Fri Jun  5 21:38:46 2020
From: Sean Christopherson
To: Marc Zyngier, Paul Mackerras, Christian Borntraeger, Janosch Frank,
    Paolo Bonzini
Subject: [PATCH 14/21] KVM: Move x86's version of struct kvm_mmu_memory_cache to common code
Date: Fri, 5 Jun 2020 14:38:46 -0700
Message-Id: <20200605213853.14959-15-sean.j.christopherson@intel.com>
In-Reply-To: <20200605213853.14959-1-sean.j.christopherson@intel.com>
References: <20200605213853.14959-1-sean.j.christopherson@intel.com>
Cc: Wanpeng Li, kvm@vger.kernel.org, David Hildenbrand, linux-mips@vger.kernel.org,
    Ben Gardon, Claudio Imbrenda, kvmarm@lists.cs.columbia.edu, Joerg Roedel,
    Julien Thierry, Junaid Shahid, Suzuki K Poulose, kvm-ppc@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, Jim Mattson, Cornelia Huck, Peter Shier,
    Christoffer Dall, Sean Christopherson, linux-kernel@vger.kernel.org,
    James Morse, Vitaly Kuznetsov, Peter Feiner

Move x86's 'struct kvm_mmu_memory_cache' to common code in anticipation of
moving the entire x86 implementation to common KVM and reusing it for arm64
and MIPS.  Add a new architecture-specific asm/kvm_types.h to control the
existence and parameters of the struct.  The new header is needed to avoid a
chicken-and-egg problem with asm/kvm_host.h, as all architectures define
instances of the struct in their vCPU structs.
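As a rough sketch of the layering described above (not part of this patch; the
architecture "foo" and the vCPU field name are made up for illustration), an
architecture opts in simply by sizing the cache from its asm/kvm_types.h, and
the common struct definition is then available before asm/kvm_host.h needs it:

/* arch/foo/include/asm/kvm_types.h: opting in is just sizing the array. */
#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40

/*
 * arch/foo/include/asm/kvm_host.h: a hypothetical vCPU struct.  It can embed
 * the cache directly because linux/kvm_types.h (which pulls in the arch's
 * asm/kvm_types.h) does not depend on asm/kvm_host.h, which is what avoids
 * the chicken-and-egg problem mentioned above.
 */
#include <linux/kvm_types.h>

struct kvm_vcpu_arch {
	/* Preallocated objects consumed by MMU flows such as page faults. */
	struct kvm_mmu_memory_cache mmu_page_cache;
};

Architectures that leave asm/kvm_types.h empty (arm64, MIPS, PowerPC and s390
in this patch) compile linux/kvm_types.h exactly as before, since the struct
is guarded by the #ifdef.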
Suggested-by: Christoffer Dall
Signed-off-by: Sean Christopherson
Reviewed-by: Ben Gardon
---
 arch/arm64/include/asm/kvm_types.h   |  6 ++++++
 arch/mips/include/asm/kvm_types.h    |  5 +++++
 arch/powerpc/include/asm/kvm_types.h |  5 +++++
 arch/s390/include/asm/kvm_types.h    |  5 +++++
 arch/x86/include/asm/kvm_host.h      | 13 -------------
 arch/x86/include/asm/kvm_types.h     |  7 +++++++
 include/linux/kvm_types.h            | 19 +++++++++++++++++++
 7 files changed, 47 insertions(+), 13 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_types.h
 create mode 100644 arch/mips/include/asm/kvm_types.h
 create mode 100644 arch/powerpc/include/asm/kvm_types.h
 create mode 100644 arch/s390/include/asm/kvm_types.h
 create mode 100644 arch/x86/include/asm/kvm_types.h

diff --git a/arch/arm64/include/asm/kvm_types.h b/arch/arm64/include/asm/kvm_types.h
new file mode 100644
index 000000000000..d0987007d581
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_types.h
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM64_KVM_TYPES_H
+#define _ASM_ARM64_KVM_TYPES_H
+
+#endif /* _ASM_ARM64_KVM_TYPES_H */
+
diff --git a/arch/mips/include/asm/kvm_types.h b/arch/mips/include/asm/kvm_types.h
new file mode 100644
index 000000000000..5efeb32a5926
--- /dev/null
+++ b/arch/mips/include/asm/kvm_types.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_MIPS_KVM_TYPES_H
+#define _ASM_MIPS_KVM_TYPES_H
+
+#endif /* _ASM_MIPS_KVM_TYPES_H */
diff --git a/arch/powerpc/include/asm/kvm_types.h b/arch/powerpc/include/asm/kvm_types.h
new file mode 100644
index 000000000000..f627eceaa314
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_types.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KVM_TYPES_H
+#define _ASM_POWERPC_KVM_TYPES_H
+
+#endif /* _ASM_POWERPC_KVM_TYPES_H */
diff --git a/arch/s390/include/asm/kvm_types.h b/arch/s390/include/asm/kvm_types.h
new file mode 100644
index 000000000000..b66a81f8a354
--- /dev/null
+++ b/arch/s390/include/asm/kvm_types.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_S390_KVM_TYPES_H
+#define _ASM_S390_KVM_TYPES_H
+
+#endif /* _ASM_S390_KVM_TYPES_H */
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fb99e6776e27..8e8fea13b6c7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -193,8 +193,6 @@ struct x86_exception;
 enum x86_intercept;
 enum x86_intercept_stage;
 
-#define KVM_NR_MEM_OBJS 40
-
 #define KVM_NR_DB_REGS	4
 
 #define DR6_BD		(1 << 13)
@@ -245,17 +243,6 @@ enum x86_intercept_stage;
 
 struct kvm_kernel_irq_routing_entry;
 
-/*
- * We don't want allocation failures within the mmu code, so we preallocate
- * enough memory for a single page fault in a cache.
- */
-struct kvm_mmu_memory_cache {
-	int nobjs;
-	gfp_t gfp_zero;
-	struct kmem_cache *kmem_cache;
-	void *objects[KVM_NR_MEM_OBJS];
-};
-
 /*
  * the pages used as guest page table on soft mmu are tracked by
  * kvm_memory_slot.arch.gfn_track which is 16 bits, so the role bits used
diff --git a/arch/x86/include/asm/kvm_types.h b/arch/x86/include/asm/kvm_types.h
new file mode 100644
index 000000000000..08f1b57d3b62
--- /dev/null
+++ b/arch/x86/include/asm/kvm_types.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_KVM_TYPES_H
+#define _ASM_X86_KVM_TYPES_H
+
+#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40
+
+#endif /* _ASM_X86_KVM_TYPES_H */
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 68e84cf42a3f..a7580f69dda0 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -20,6 +20,8 @@ enum kvm_mr_change;
 
 #include <linux/types.h>
 
+#include <asm/kvm_types.h>
+
 /*
  * Address types:
  *
@@ -58,4 +60,21 @@ struct gfn_to_pfn_cache {
 	bool dirty;
 };
 
+#ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
+/*
+ * Memory caches are used to preallocate memory ahead of various MMU flows,
+ * e.g. page fault handlers.  Gracefully handling allocation failures deep in
+ * MMU flows is problematic, as is triggering reclaim, I/O, etc... while
+ * holding MMU locks.  Note, these caches act more like prefetch buffers than
+ * classical caches, i.e. objects are not returned to the cache on being freed.
+ */
+struct kvm_mmu_memory_cache {
+	int nobjs;
+	gfp_t gfp_zero;
+	struct kmem_cache *kmem_cache;
+	void *objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE];
+};
+#endif
+
+
 #endif /* __KVM_TYPES_H__ */
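For context, the fill/consume pattern the struct is designed for looks roughly
like the sketch below.  It is modeled on x86's existing MMU cache helpers,
which later patches in this series move to common code; the function names
here are illustrative and are not the API added by this patch.

#include <linux/bug.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/kvm_types.h>
#include <linux/slab.h>

/* Fill the cache in a context that may sleep, i.e. before taking the MMU lock. */
static int example_topup(struct kvm_mmu_memory_cache *mc, int min)
{
	void *obj;

	/* 'min' must not exceed the arch-sized objects[] array. */
	while (mc->nobjs < min) {
		if (mc->kmem_cache)
			obj = kmem_cache_alloc(mc->kmem_cache,
					       GFP_KERNEL_ACCOUNT | mc->gfp_zero);
		else
			obj = (void *)__get_free_page(GFP_KERNEL_ACCOUNT |
						      mc->gfp_zero);
		if (!obj)
			return -ENOMEM;
		mc->objects[mc->nobjs++] = obj;
	}
	return 0;
}

/* Consume under the MMU lock; a prior successful topup guarantees nobjs > 0. */
static void *example_alloc(struct kvm_mmu_memory_cache *mc)
{
	BUG_ON(!mc->nobjs);
	return mc->objects[--mc->nobjs];
}

Failing in the topup path, where an error can still be propagated cleanly, is
what lets consumers running under MMU locks assume allocation never fails.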