From patchwork Fri Feb 12 13:59:29 2016
X-Patchwork-Submitter: Suravee Suthikulpanit
X-Patchwork-Id: 8292321
From: Suravee Suthikulpanit
Subject: [PART1 RFC 4/9] KVM: x86: Detect and Initialize AVIC support
Date: Fri, 12 Feb 2016 20:59:29 +0700
Message-ID: <1455285574-27892-5-git-send-email-suravee.suthikulpanit@amd.com>
In-Reply-To: <1455285574-27892-1-git-send-email-suravee.suthikulpanit@amd.com>
References: <1455285574-27892-1-git-send-email-suravee.suthikulpanit@amd.com>
From: Suravee Suthikulpanit

This patch introduces the AVIC-related data structures and the AVIC initialization code.
There are three main data structures for AVIC:
* Virtual APIC (vAPIC) backing page (per-VCPU)
* Physical APIC ID table (per-VM)
* Logical APIC ID table (per-VM)

In order to accommodate the new per-VM tables, we introduce a new per-VM arch-specific void pointer, struct kvm_arch.arch_data. This will point to the newly introduced struct svm_vm_data.

This patch also introduces code to detect the new SVM feature bit, CPUID Fn8000_000A_EDX[13], which identifies support for the AMD Advanced Virtual Interrupt Controller (AVIC).

Currently, AVIC is disabled by default. Users can manually enable AVIC via the kernel boot option kvm-amd.avic=1 or when loading the kvm-amd module with the parameter avic=1.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 arch/x86/include/asm/cpufeature.h |   1 +
 arch/x86/include/asm/kvm_host.h   |   2 +
 arch/x86/kernel/cpu/scattered.c   |   1 +
 arch/x86/kvm/svm.c                | 404 +++++++++++++++++++++++++++++++++++++-
 4 files changed, 407 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 7ad8c94..ee85900 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -203,6 +203,7 @@
 #define X86_FEATURE_VMMCALL	( 8*32+15) /* Prefer vmmcall to vmcall */
 #define X86_FEATURE_XENPV	( 8*32+16) /* "" Xen paravirtual guest */
+#define X86_FEATURE_AVIC	( 8*32+17) /* AMD Virtual Interrupt Controller support */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ebx), word 9 */
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 44adbb8..7b78328 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -754,6 +754,8 @@ struct kvm_arch {
 
 	bool irqchip_split;
 	u8 nr_reserved_ioapic_pins;
+
+	void *arch_data;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index 8cb57df..88cfbe7 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -37,6 +37,7 @@ void init_scattered_cpuid_features(struct cpuinfo_x86 *c)
 		{ X86_FEATURE_HW_PSTATE,	CR_EDX, 7, 0x80000007, 0 },
 		{ X86_FEATURE_CPB,		CR_EDX, 9, 0x80000007, 0 },
 		{ X86_FEATURE_PROC_FEEDBACK,	CR_EDX,11, 0x80000007, 0 },
+		{ X86_FEATURE_AVIC,		CR_EDX,13, 0x8000000a, 0 },
 		{ 0, 0, 0, 0, 0 }
 	};
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index ca185fb..9440b48 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -78,6 +78,11 @@ MODULE_DEVICE_TABLE(x86cpu, svm_cpu_id);
 #define TSC_RATIO_MIN		0x0000000000000001ULL
 #define TSC_RATIO_MAX		0x000000ffffffffffULL
 
+#define AVIC_HPA_MASK	~((0xFFFULL << 52) | 0xFFF)
+
+/* NOTE: Current max index allowed for physical APIC ID table is 255 */
+#define AVIC_PHY_APIC_ID_MAX	0xFF
+
 static bool erratum_383_found __read_mostly;
 
 static const u32 host_save_user_msrs[] = {
@@ -162,6 +167,36 @@ struct vcpu_svm {
 
 	/* cached guest cpuid flags for faster access */
 	bool nrips_enabled	: 1;
+
+	struct page *avic_bk_page;
+};
+
+struct __attribute__ ((__packed__))
+svm_avic_log_ait_entry {
+	u32 guest_phy_apic_id	: 8,
+	    res			: 23,
+	    valid		: 1;
+};
+
+struct __attribute__ ((__packed__))
+svm_avic_phy_ait_entry {
+	u64 host_phy_apic_id	: 8,
+	    res1		: 4,
+	    bk_pg_ptr		: 40,
+	    res2		: 10,
+	    is_running		: 1,
+	    valid		: 1;
+};
+
+/* Note: This structure is per VM */
+struct svm_vm_data {
+	atomic_t count;
+	u32 ldr_mode;
+	u32 avic_max_vcpu_id;
+	u32 avic_tag;
+
+	struct page *avic_log_ait_page;
+	struct page *avic_phy_ait_page;
 };
 
 static DEFINE_PER_CPU(u64, current_tsc_ratio);
@@ -205,6 +240,10 @@ module_param(npt, int, S_IRUGO);
 static int nested = true;
 module_param(nested, int, S_IRUGO);
 
+/* enable / disable AVIC */
+static int avic = false;
+module_param(avic, int, S_IRUGO);
+
 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 static void svm_flush_tlb(struct kvm_vcpu *vcpu);
 static void svm_complete_interrupts(struct vcpu_svm *svm);
@@ -234,6 +273,13 @@ enum {
 /* TPR and CR2 are always written before VMRUN */
 #define VMCB_ALWAYS_DIRTY_MASK	((1U << VMCB_INTR) | (1U << VMCB_CR2))
 
+#define VMCB_AVIC_APIC_BAR_MASK	0xFFFFFFFFFF000ULL
+
+static inline void avic_update_vapic_bar(struct vcpu_svm *svm, u64 data)
+{
+	svm->vmcb->control.avic_vapic_bar = data & VMCB_AVIC_APIC_BAR_MASK;
+}
+
 static inline void mark_all_dirty(struct vmcb *vmcb)
 {
 	vmcb->control.clean = 0;
@@ -923,6 +969,13 @@ static __init int svm_hardware_setup(void)
 	} else
 		kvm_disable_tdp();
 
+	if (avic && (!npt_enabled || !boot_cpu_has(X86_FEATURE_AVIC)))
+		avic = false;
+
+	if (avic) {
+		printk(KERN_INFO "kvm: AVIC enabled\n");
+	}
+
 	return 0;
 
 err:
@@ -1000,6 +1053,27 @@ static void svm_adjust_tsc_offset_guest(struct kvm_vcpu *vcpu, s64 adjustment)
 	mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
 }
 
+static void avic_init_vmcb(struct vcpu_svm *svm)
+{
+	struct vmcb *vmcb = svm->vmcb;
+	struct svm_vm_data *vm_data = svm->vcpu.kvm->arch.arch_data;
+	phys_addr_t bpa = PFN_PHYS(page_to_pfn(svm->avic_bk_page));
+	phys_addr_t lpa = PFN_PHYS(page_to_pfn(vm_data->avic_log_ait_page));
+	phys_addr_t ppa = PFN_PHYS(page_to_pfn(vm_data->avic_phy_ait_page));
+
+	if (!vmcb)
+		return;
+
+	pr_debug("SVM: %s: bpa=%#llx, lpa=%#llx, ppa=%#llx\n",
+		 __func__, bpa, lpa, ppa);
+
+	vmcb->control.avic_enable = 1;
+	vmcb->control.avic_bk_page = bpa & AVIC_HPA_MASK;
+	vmcb->control.avic_log_apic_id = lpa & AVIC_HPA_MASK;
+	vmcb->control.avic_phy_apic_id = ppa & AVIC_HPA_MASK;
+	vmcb->control.avic_phy_apic_id |= AVIC_PHY_APIC_ID_MAX;
+}
+
 static void init_vmcb(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -1113,6 +1187,309 @@ static void init_vmcb(struct vcpu_svm *svm)
 	mark_all_dirty(svm->vmcb);
 
 	enable_gif(svm);
+
+	if (avic)
+		avic_init_vmcb(svm);
+}
+
+static struct svm_avic_phy_ait_entry *
+avic_get_phy_ait_entry(struct kvm_vcpu *vcpu, int index)
+{
+	struct svm_avic_phy_ait_entry *avic_phy_ait;
+	struct svm_vm_data *vm_data = vcpu->kvm->arch.arch_data;
+
+	if (!vm_data)
+		return NULL;
+
+	/* Note: APIC ID = 0xff is used for broadcast.
+	 *       APIC ID > 0xff is reserved.
+	 */
+	if (index >= 0xff)
+		return NULL;
+
+	avic_phy_ait = page_address(vm_data->avic_phy_ait_page);
+
+	return &avic_phy_ait[index];
+}
+
+struct svm_avic_log_ait_entry *
+avic_get_log_ait_entry(struct kvm_vcpu *vcpu, u8 mda, bool is_flat)
+{
+	struct svm_vm_data *vm_data = vcpu->kvm->arch.arch_data;
+	int index;
+	struct svm_avic_log_ait_entry *avic_log_ait;
+
+	if (!vm_data)
+		return NULL;
+
+	if (is_flat) { /* flat */
+		if (mda > 7)
+			return NULL;
+		index = mda;
+	} else { /* cluster */
+		int apic_id = mda & 0xf;
+		int cluster_id = (mda & 0xf0) >> 4;
+
+		if (apic_id > 4 || cluster_id >= 0xf)
+			return NULL;
+		index = (cluster_id << 2) + apic_id;
+	}
+	avic_log_ait = (struct svm_avic_log_ait_entry *)
+			page_address(vm_data->avic_log_ait_page);
+
+	return &avic_log_ait[index];
+}
+
+static inline void avic_set_bk_page_entry(struct vcpu_svm *svm, int reg_off, u32 val)
+{
+	void *avic_bk = page_address(svm->avic_bk_page);
+
+	*((u32 *) (avic_bk + reg_off)) = val;
+}
+
+static inline u32 *avic_get_bk_page_entry(struct vcpu_svm *svm, u32 offset)
+{
+	char *tmp = (char *)page_address(svm->avic_bk_page);
+
+	return (u32 *)(tmp + offset);
+}
+
+static int avic_init_log_apic_entry(struct kvm_vcpu *vcpu, u8 g_phy_apic_id,
+				    u8 log_apic_id)
+{
+	u32 mod;
+	struct svm_avic_log_ait_entry *entry;
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (!svm)
+		return -EINVAL;
+
+	mod = (*avic_get_bk_page_entry(svm, APIC_DFR) >> 28) & 0xf;
+	entry = avic_get_log_ait_entry(vcpu, log_apic_id, (mod == 0xf));
+	if (!entry)
+		return -EINVAL;
+	entry->guest_phy_apic_id = g_phy_apic_id;
+	entry->valid = 1;
+
+	return 0;
+}
+
+static int avic_init_bk_page(struct kvm_vcpu *vcpu)
+{
+	int i;
+	u64 addr;
+	struct page *page;
+	int id = vcpu->vcpu_id;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_lapic *apic = vcpu->arch.apic;
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	addr = APIC_DEFAULT_PHYS_BASE + (id * PAGE_SIZE);
+	page = gfn_to_page(kvm, addr >> PAGE_SHIFT);
+	if (is_error_page(page))
+		return -EFAULT;
+
+	/*
+	 * Do not pin the page in memory, so that memory hot-unplug
+	 * is able to migrate it.
+	 */
+	put_page(page);
+
+	/* Setting up AVIC Backing Page */
+	svm->avic_bk_page = page;
+	clear_page(kmap(page));
+	pr_debug("SVM: %s: vAPIC bk page: cpu=%u, addr=%#llx, pa=%#llx\n",
+		 __func__, id, addr,
+		 (unsigned long long) PFN_PHYS(page_to_pfn(page)));
+
+	avic_set_bk_page_entry(svm, APIC_ID, kvm_apic_get_reg(apic, APIC_ID));
+	avic_set_bk_page_entry(svm, APIC_LVR, kvm_apic_get_reg(apic, APIC_LVR));
+	for (i = 0; i < KVM_APIC_LVT_NUM; i++)
+		avic_set_bk_page_entry(svm, APIC_LVTT + 0x10 * i, APIC_LVT_MASKED);
+	avic_set_bk_page_entry(svm, APIC_LVT0,
+			       SET_APIC_DELIVERY_MODE(0, APIC_MODE_EXTINT));
+	avic_set_bk_page_entry(svm, APIC_DFR, 0xffffffffU);
+	avic_set_bk_page_entry(svm, APIC_SPIV, 0xff);
+	avic_set_bk_page_entry(svm, APIC_TASKPRI, 0);
+	avic_set_bk_page_entry(svm, APIC_LDR, kvm_apic_get_reg(apic, APIC_LDR));
+	avic_set_bk_page_entry(svm, APIC_ESR, 0);
+	avic_set_bk_page_entry(svm, APIC_ICR, 0);
+	avic_set_bk_page_entry(svm, APIC_ICR2, 0);
+	avic_set_bk_page_entry(svm, APIC_TDCR, 0);
+	avic_set_bk_page_entry(svm, APIC_TMICT, 0);
+	for (i = 0; i < 8; i++) {
+		avic_set_bk_page_entry(svm, APIC_IRR + 0x10 * i, 0);
+		avic_set_bk_page_entry(svm, APIC_ISR + 0x10 * i, 0);
+		avic_set_bk_page_entry(svm, APIC_TMR + 0x10 * i, 0);
+	}
+
+	avic_init_vmcb(svm);
+
+	return 0;
+}
+
+static inline void avic_unalloc_bk_page(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (svm->avic_bk_page)
+		kunmap(svm->avic_bk_page);
+}
+
+static int avic_alloc_bk_page(struct vcpu_svm *svm, int id)
+{
+	int ret = 0, i;
+	bool realloc = false;
+	struct kvm_vcpu *vcpu;
+	struct kvm *kvm = svm->vcpu.kvm;
+	struct svm_vm_data *vm_data = kvm->arch.arch_data;
+
+	mutex_lock(&kvm->slots_lock);
+
+	/* Check if we have already allocated vAPIC backing
+	 * page for this vCPU. If not, we need to realloc
+	 * a new one and re-assign all other vCPUs.
+	 */
+	if (kvm->arch.apic_access_page_done &&
+	    (id > vm_data->avic_max_vcpu_id)) {
+		kvm_for_each_vcpu(i, vcpu, kvm)
+			avic_unalloc_bk_page(vcpu);
+
+		__x86_set_memory_region(kvm,
+					APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
+					0, 0);
+		realloc = true;
+		vm_data->avic_max_vcpu_id = 0;
+	}
+
+	/*
+	 * We are allocating vAPIC backing pages
+	 * up to the max vCPU ID
+	 */
+	if (id >= vm_data->avic_max_vcpu_id) {
+		ret = __x86_set_memory_region(kvm,
+					      APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
+					      APIC_DEFAULT_PHYS_BASE,
+					      PAGE_SIZE * (id + 1));
+		if (ret)
+			goto out;
+
+		vm_data->avic_max_vcpu_id = id;
+	}
+
+	/* Reinit vAPIC backing page for existing vcpus */
+	if (realloc)
+		kvm_for_each_vcpu(i, vcpu, kvm)
+			avic_init_bk_page(vcpu);
+
+	avic_init_bk_page(&svm->vcpu);
+
+	kvm->arch.apic_access_page_done = true;
+
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
+}
+
+static void avic_vm_uninit(struct kvm *kvm)
+{
+	struct svm_vm_data *vm_data = kvm->arch.arch_data;
+
+	if (!vm_data)
+		return;
+
+	if (vm_data->avic_log_ait_page)
+		__free_page(vm_data->avic_log_ait_page);
+	if (vm_data->avic_phy_ait_page)
+		__free_page(vm_data->avic_phy_ait_page);
+	kfree(vm_data);
+	kvm->arch.arch_data = NULL;
+}
+
+static void avic_vcpu_uninit(struct kvm_vcpu *vcpu)
+{
+	struct svm_vm_data *vm_data = vcpu->kvm->arch.arch_data;
+
+	avic_unalloc_bk_page(vcpu);
+
+	if (vm_data &&
+	    (atomic_read(&vm_data->count) == 0 ||
+	     atomic_dec_and_test(&vm_data->count)))
+		avic_vm_uninit(vcpu->kvm);
+}
+
+static atomic_t avic_tag_gen = ATOMIC_INIT(1);
+
+static inline u32 avic_get_next_tag(void)
+{
+	u32 tag = atomic_read(&avic_tag_gen);
+
+	atomic_inc(&avic_tag_gen);
+	return tag;
+}
+
+static int avic_vm_init(struct kvm *kvm)
+{
+	int err = -ENOMEM;
+	struct svm_vm_data *vm_data;
+	struct page *avic_phy_ait_page;
+	struct page *avic_log_ait_page;
+
+	vm_data = kzalloc(sizeof(struct svm_vm_data),
+			  GFP_KERNEL);
+	if (!vm_data)
+		return err;
+
+	kvm->arch.arch_data = vm_data;
+	atomic_set(&vm_data->count, 0);
+
+	/* Allocating physical APIC ID table (4KB) */
+	avic_phy_ait_page = alloc_page(GFP_KERNEL);
+	if (!avic_phy_ait_page)
+		goto free_avic;
+
+	vm_data->avic_phy_ait_page = avic_phy_ait_page;
+	clear_page(page_address(avic_phy_ait_page));
+
+	/* Allocating logical APIC ID table (4KB) */
+	avic_log_ait_page = alloc_page(GFP_KERNEL);
+	if (!avic_log_ait_page)
+		goto free_avic;
+
+	vm_data->avic_log_ait_page = avic_log_ait_page;
+	clear_page(page_address(avic_log_ait_page));
+
+	vm_data->avic_tag = avic_get_next_tag();
+
+	return 0;
+
+free_avic:
+	avic_vm_uninit(kvm);
+	return err;
+}
+
+static int avic_vcpu_init(struct kvm *kvm, struct vcpu_svm *svm, int id)
+{
+	int err;
+	struct svm_vm_data *vm_data = NULL;
+
+	/* Note: svm_vm_data is per VM */
+	if (!kvm->arch.arch_data) {
+		err = avic_vm_init(kvm);
+		if (err)
+			return err;
+	}
+
+	err = avic_alloc_bk_page(svm, id);
+	if (err) {
+		avic_vcpu_uninit(&svm->vcpu);
+		return err;
+	}
+
+	vm_data = kvm->arch.arch_data;
+	atomic_inc(&vm_data->count);
+
+	return 0;
 }
 
 static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
@@ -1131,6 +1508,9 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 
 	kvm_cpuid(vcpu, &eax, &dummy, &dummy, &dummy);
 	kvm_register_write(vcpu, VCPU_REGS_RDX, eax);
+
+	if (avic && !init_event)
+		avic_update_vapic_bar(svm, APIC_DEFAULT_PHYS_BASE);
 }
 
 static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
@@ -1169,6 +1549,12 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	if (!hsave_page)
 		goto free_page3;
 
+	if (avic) {
+		err = avic_vcpu_init(kvm, svm, id);
+		if (err)
+			goto free_page4;
+	}
+
 	svm->nested.hsave = page_address(hsave_page);
 
 	svm->msrpm = page_address(msrpm_pages);
@@ -1187,6 +1573,8 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 
 	return &svm->vcpu;
 
+free_page4:
+	__free_page(hsave_page);
 free_page3:
 	__free_pages(nested_msrpm_pages, MSRPM_ALLOC_ORDER);
 free_page2:
@@ -1209,6 +1597,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 	__free_page(virt_to_page(svm->nested.hsave));
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
+	avic_vcpu_uninit(vcpu);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
 }
@@ -3372,6 +3761,7 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
 	pr_err("%-20s%08x\n", "exit_int_info_err:", control->exit_int_info_err);
 	pr_err("%-20s%lld\n", "nested_ctl:", control->nested_ctl);
 	pr_err("%-20s%016llx\n", "nested_cr3:", control->nested_cr3);
+	pr_err("%-20s%016llx\n", "avic_vapic_bar:", control->avic_vapic_bar);
 	pr_err("%-20s%08x\n", "event_inj:", control->event_inj);
 	pr_err("%-20s%08x\n", "event_inj_err:", control->event_inj_err);
 	pr_err("%-20s%lld\n", "lbr_ctl:", control->lbr_ctl);
@@ -3603,7 +3993,17 @@ static void svm_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
 
 static bool svm_get_enable_apicv(void)
 {
-	return false;
+	return avic;
+}
+
+static void svm_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
+{
+	return;
+}
+
+static void svm_hwapic_isr_update(struct kvm *kvm, int isr)
+{
+	return;
 }
 
 static void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
@@ -4375,6 +4775,8 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.refresh_apicv_exec_ctrl = svm_refresh_apicv_exec_ctrl,
 	.load_eoi_exitmap = svm_load_eoi_exitmap,
 	.sync_pir_to_irr = svm_sync_pir_to_irr,
+	.hwapic_irr_update = svm_hwapic_irr_update,
+	.hwapic_isr_update = svm_hwapic_isr_update,
 
 	.set_tss_addr = svm_set_tss_addr,
 	.get_tdp_level = get_npt_level,