From patchwork Thu Feb 16 16:48:15 2017
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 9577723
Date: Thu, 16 Feb 2017 17:48:15 +0100
From: Peter Zijlstra
To: Waiman Long
Cc: Jeremy Fitzhardinge, Chris Wright, Alok Kataria, Rusty Russell,
	Ingo Molnar, Thomas Gleixner, "H. Peter Anvin",
Peter Anvin" , linux-arch@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Pan Xinhui , Paolo Bonzini , Radim =?utf-8?B?S3LEjW3DocWZ?= , Boris Ostrovsky , Juergen Gross , andrew.cooper3@citrix.com Subject: Re: [PATCH v4 2/2] x86/kvm: Provide optimized version of vcpu_is_preempted() for x86-64 Message-ID: <20170216164815.GD6515@twins.programming.kicks-ass.net> References: <1487194670-6319-1-git-send-email-longman@redhat.com> <1487194670-6319-3-git-send-email-longman@redhat.com> MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: <1487194670-6319-3-git-send-email-longman@redhat.com> User-Agent: Mutt/1.5.23.1 (2014-03-12) Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP On Wed, Feb 15, 2017 at 04:37:50PM -0500, Waiman Long wrote: > +/* > + * Hand-optimize version for x86-64 to avoid 8 64-bit register saving and > + * restoring to/from the stack. It is assumed that the preempted value > + * is at an offset of 16 from the beginning of the kvm_steal_time structure > + * which is verified by the BUILD_BUG_ON() macro below. > + */ > +#define PREEMPTED_OFFSET 16 As per Andrew's suggestion, the 'right' way is something like so. --- asm-offsets_64.c | 11 +++++++++++ kvm.c | 14 ++++---------- 2 files changed, 15 insertions(+), 10 deletions(-) --- a/arch/x86/kernel/asm-offsets_64.c +++ b/arch/x86/kernel/asm-offsets_64.c @@ -13,6 +13,10 @@ static char syscalls_ia32[] = { #include }; +#ifdef CONFIG_KVM_GUEST +#include +#endif + int main(void) { #ifdef CONFIG_PARAVIRT @@ -22,6 +26,13 @@ int main(void) BLANK(); #endif +#ifdef CONFIG_KVM_GUEST +#ifdef CONFIG_PARAVIRT_SPINLOCKS + OFFSET(KVM_STEAL_TIME_preempted, kvm_steal_time, preempted); + BLANK(); +#endif +#endif + #define ENTRY(entry) OFFSET(pt_regs_ ## entry, pt_regs, entry) ENTRY(bx); ENTRY(cx); --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -600,22 +600,21 @@ PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_ #else +#include + extern bool __raw_callee_save___kvm_vcpu_is_preempted(long); /* * Hand-optimize version for x86-64 to avoid 8 64-bit register saving and - * restoring to/from the stack. It is assumed that the preempted value - * is at an offset of 16 from the beginning of the kvm_steal_time structure - * which is verified by the BUILD_BUG_ON() macro below. + * restoring to/from the stack. */ -#define PREEMPTED_OFFSET 16 asm( ".pushsection .text;" ".global __raw_callee_save___kvm_vcpu_is_preempted;" ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;" "__raw_callee_save___kvm_vcpu_is_preempted:" "movq __per_cpu_offset(,%rdi,8), %rax;" -"cmpb $0, " __stringify(PREEMPTED_OFFSET) "+steal_time(%rax);" +"cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);" "setne %al;" "ret;" ".popsection"); @@ -627,11 +626,6 @@ asm( */ void __init kvm_spinlock_init(void) { -#ifdef CONFIG_X86_64 - BUILD_BUG_ON((offsetof(struct kvm_steal_time, preempted) - != PREEMPTED_OFFSET) || (sizeof(steal_time.preempted) != 1)); -#endif - if (!kvm_para_available()) return; /* Does host kernel support KVM_FEATURE_PV_UNHALT? */