From patchwork Mon Dec 7 20:36:41 2015
From: David Matlack <dmatlack@google.com>
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jan.kiszka@siemens.com, David Matlack <dmatlack@google.com>
Subject: [PATCH kvm-unit-tests] x86: always inline functions called after set_exception_return
Date: Mon, 7 Dec 2015 12:36:41 -0800
Message-Id: <1449520601-31507-1-git-send-email-dmatlack@google.com>

set_exception_return forces exception
handlers to return to a specific address instead of returning to the
instruction address pushed by the CPU at the time of the exception. The
unit tests apic.c and vmx.c use this functionality to recover from
expected exceptions.

When using set_exception_return we have to be careful not to modify the
stack (such as by doing a function call) as triggering the exception
will likely jump us past the instructions which undo the stack
manipulation (such as a ret). To accomplish this, declare all functions
called after set_exception_return as __always_inline, so that the
compiler always inlines them.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 lib/libcflat.h      | 4 ++++
 lib/x86/processor.h | 2 +-
 x86/vmx.c           | 4 ++--
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/lib/libcflat.h b/lib/libcflat.h
index 9747ccd..9ffb5db 100644
--- a/lib/libcflat.h
+++ b/lib/libcflat.h
@@ -27,6 +27,10 @@
 
 #define __unused __attribute__((__unused__))
 
+#ifndef __always_inline
+# define __always_inline inline __attribute__((always_inline))
+#endif
+
 #define xstr(s) xxstr(s)
 #define xxstr(s) #s
 
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 95cea1a..c4bc64f 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -149,7 +149,7 @@ static inline u64 rdmsr(u32 index)
 	return a | ((u64)d << 32);
 }
 
-static inline void wrmsr(u32 index, u64 val)
+static __always_inline void wrmsr(u32 index, u64 val)
 {
 	u32 a = val, d = val >> 32;
 	asm volatile ("wrmsr" : : "a"(a), "d"(d), "c"(index) : "memory");
diff --git a/x86/vmx.c b/x86/vmx.c
index f05cd33..28cd349 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -117,7 +117,7 @@ static void __attribute__((__used__)) syscall_handler(u64 syscall_no)
 	current->syscall_handler(syscall_no);
 }
 
-static inline int vmx_on()
+static __always_inline int vmx_on()
 {
 	bool ret;
 	u64 rflags = read_rflags() | X86_EFLAGS_CF | X86_EFLAGS_ZF;
@@ -126,7 +126,7 @@ static inline int vmx_on()
 	return ret;
 }
 
-static inline int vmx_off()
+static __always_inline int vmx_off()
 {
 	bool ret;
 	u64 rflags = read_rflags() | X86_EFLAGS_CF | X86_EFLAGS_ZF;
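
For readers less familiar with the mechanism, here is a rough sketch of
the usage pattern this patch protects. It is illustrative only, not code
taken from apic.c or vmx.c: the helper name, the MSR arguments and the
exact set_exception_return signature (taking the address to resume at)
are assumptions made for the example.

#include "libcflat.h"
#include "processor.h"

/*
 * Sketch only: perform a WRMSR that is expected to raise #GP and recover
 * via set_exception_return.  Assumes set_exception_return(void *) makes
 * the exception handler return to the given address.
 */
static int wrmsr_expecting_gp(u32 index, u64 val)
{
	volatile int faulted = 1;

	/*
	 * From here until 'resume' the stack must not change.  wrmsr()
	 * is __always_inline (this patch), so there is no call/ret whose
	 * cleanup would be skipped when the fault jumps straight to
	 * 'resume'.
	 */
	set_exception_return(&&resume);
	wrmsr(index, val);	/* expected to fault with #GP */
	faulted = 0;		/* reached only if no fault occurred */
resume:
	return faulted;
}

If wrmsr() were an ordinary out-of-line call, the fault would skip its
ret, leaving the pushed return address on the stack when control reaches
'resume'; forcing it inline removes that stack manipulation entirely.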