From patchwork Wed Jun 11 20:22:59 2014
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 4338341
From: Andy Lutomirski
To: linux-kernel@vger.kernel.org, Kees Cook, Will Drewry
Cc: linux-arch@vger.kernel.org, linux-mips@linux-mips.org, x86@kernel.org,
    Oleg Nesterov, Andy Lutomirski, linux-security-module@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org
Subject: [RFC 2/5] x86_64, entry: Treat regs->ax the same in fastpath and slowpath syscalls
Date: Wed, 11 Jun 2014 13:22:59 -0700

For slowpath syscalls, we initialize regs->ax to -ENOSYS and stick the
syscall number into regs->orig_ax prior to any possible tracing and
syscall execution.  This is user-visible ABI used by ptrace syscall
emulation and seccomp.

For fastpath syscalls, there's no good reason not to do the same thing.
It's even slightly simpler than what we're currently doing.  It probably
has no measurable performance impact.  It should have no user-visible
effect.

The purpose of this patch is to prepare for seccomp-based syscall
emulation in the fast path.  This change is just subtle enough that I'm
keeping it separate.
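To make the ABI in question concrete: at a ptrace syscall-entry stop on
x86_64, the tracer sees the syscall number in orig_rax and -ENOSYS in rax,
and it can cancel the call by rewriting orig_rax to -1; unless the tracer
substitutes a different return value at the exit stop, the tracee then
observes -ENOSYS.  A minimal userspace sketch of that flow (illustration
only, not part of this patch; error handling omitted):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/syscall.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	pid_t child = fork();

	if (child == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);
		/* This getpid() will be cancelled by the tracer. */
		long ret = syscall(SYS_getpid);
		printf("tracee: syscall returned %ld\n", ret);
		return 0;
	}

	int status;
	struct user_regs_struct regs;

	waitpid(child, &status, 0);		/* initial SIGSTOP */

	/* Step from syscall stop to syscall stop until getpid() is entered. */
	for (;;) {
		ptrace(PTRACE_SYSCALL, child, NULL, NULL);
		waitpid(child, &status, 0);
		if (WIFEXITED(status))
			return 0;
		ptrace(PTRACE_GETREGS, child, NULL, &regs);
		/* Entry stop: orig_rax holds the number, rax holds -ENOSYS. */
		if (regs.orig_rax == SYS_getpid && (long)regs.rax == -ENOSYS)
			break;
	}

	regs.orig_rax = -1;			/* cancel the syscall */
	ptrace(PTRACE_SETREGS, child, NULL, &regs);

	ptrace(PTRACE_SYSCALL, child, NULL, NULL);	/* run to syscall exit */
	waitpid(child, &status, 0);
	ptrace(PTRACE_GETREGS, child, NULL, &regs);
	/* rax is still -ENOSYS here unless the tracer overwrites it. */
	printf("tracer: rax at syscall exit = %ld\n", (long)regs.rax);

	ptrace(PTRACE_CONT, child, NULL, NULL);
	waitpid(child, &status, 0);
	return 0;
}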
Signed-off-by: Andy Lutomirski
---
 arch/x86/include/asm/calling.h |  6 +++++-
 arch/x86/kernel/entry_64.S     | 11 +++--------
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/calling.h b/arch/x86/include/asm/calling.h
index cb4c73b..76659b6 100644
--- a/arch/x86/include/asm/calling.h
+++ b/arch/x86/include/asm/calling.h
@@ -85,7 +85,7 @@ For 32-bit we have the following conventions - kernel is built with
 #define ARGOFFSET	R11
 #define SWFRAME		ORIG_RAX
 
-	.macro SAVE_ARGS addskip=0, save_rcx=1, save_r891011=1
+	.macro SAVE_ARGS addskip=0, save_rcx=1, save_r891011=1, rax_enosys=0
 	subq  $9*8+\addskip, %rsp
 	CFI_ADJUST_CFA_OFFSET	9*8+\addskip
 	movq_cfi rdi, 8*8
@@ -96,7 +96,11 @@ For 32-bit we have the following conventions - kernel is built with
 	movq_cfi rcx, 5*8
 	.endif
 
+	.if \rax_enosys
+	movq $-ENOSYS, 4*8(%rsp)
+	.else
 	movq_cfi rax, 4*8
+	.endif
 
 	.if \save_r891011
 	movq_cfi r8, 3*8
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 1e96c36..f9e713a 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -611,8 +611,8 @@ GLOBAL(system_call_after_swapgs)
 	 * and short:
 	 */
 	ENABLE_INTERRUPTS(CLBR_NONE)
-	SAVE_ARGS 8,0
-	movq  %rax,ORIG_RAX-ARGOFFSET(%rsp)
+	SAVE_ARGS 8, 0, rax_enosys=1
+	movq_cfi rax,(ORIG_RAX-ARGOFFSET)
 	movq  %rcx,RIP-ARGOFFSET(%rsp)
 	CFI_REL_OFFSET rip,RIP-ARGOFFSET
 	testl $_TIF_WORK_SYSCALL_ENTRY,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
@@ -624,7 +624,7 @@ system_call_fastpath:
 	andl $__SYSCALL_MASK,%eax
 	cmpl $__NR_syscall_max,%eax
 #endif
-	ja badsys
+	ja ret_from_sys_call  /* and return regs->ax */
 	movq %r10,%rcx
 	call *sys_call_table(,%rax,8)  # XXX:	 rip relative
 	movq %rax,RAX-ARGOFFSET(%rsp)
@@ -683,10 +683,6 @@ sysret_signal:
 	FIXUP_TOP_OF_STACK %r11, -ARGOFFSET
 	jmp int_check_syscall_exit_work
 
-badsys:
-	movq $-ENOSYS,RAX-ARGOFFSET(%rsp)
-	jmp ret_from_sys_call
-
 #ifdef CONFIG_AUDITSYSCALL
 	/*
 	 * Fast path for syscall audit without full syscall trace.
@@ -726,7 +722,6 @@ tracesys:
 	jz auditsys
 #endif
 	SAVE_REST
-	movq $-ENOSYS,RAX(%rsp) /* ptrace can change this for a bad syscall */
 	FIXUP_TOP_OF_STACK %rdi
 	movq %rsp,%rdi
 	call syscall_trace_enter
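One way to spot-check the "no user-visible effect" claim is to confirm
that an out-of-range syscall number still fails with ENOSYS now that the
badsys label is gone and the fastpath jumps straight to ret_from_sys_call
with the pre-initialized regs->ax.  A trivial test program (illustration
only, not part of this patch; 100000 is assumed to be well above
__NR_syscall_max):

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* An out-of-range syscall number should fail with ENOSYS. */
	long ret = syscall(100000);

	printf("ret=%ld errno=%d (%s)\n", ret, errno,
	       errno == ENOSYS ? "ENOSYS" : "unexpected");
	return 0;
}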