From patchwork Mon Feb 5 15:34:19 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 10200739
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 4/9] arm64: Use pointer masking to limit uaccess speculation
Date: Mon, 5 Feb 2018 15:34:19 +0000
Message-Id: <1517844864-15887-5-git-send-email-will.deacon@arm.com>
In-Reply-To: <1517844864-15887-1-git-send-email-will.deacon@arm.com>
References: <1517844864-15887-1-git-send-email-will.deacon@arm.com>
Cc: mark.rutland@arm.com, keescook@chromium.org, catalin.marinas@arm.com,
    Will Deacon <will.deacon@arm.com>, dan.j.williams@intel.com,
    robin.murphy@arm.com

From: Robin Murphy <robin.murphy@arm.com>

Similarly to x86, mitigate speculation past an access_ok() check by
masking the pointer against the address limit before use.

Even if we don't expect speculative writes per se, it is plausible that
a CPU may still speculate at least as far as fetching a cache line for
writing, hence we also harden put_user() and clear_user() for peace of
mind.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/uaccess.h | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index f2fc026cffb4..e49fe723d72d 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -228,6 +228,26 @@ static inline void uaccess_enable_not_uao(void)
 }
 
 /*
+ * Sanitise a uaccess pointer such that it becomes NULL if above the
+ * current addr_limit.
+ */
+#define uaccess_mask_ptr(ptr) (__typeof__(ptr))__uaccess_mask_ptr(ptr)
+static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
+{
+	void __user *safe_ptr;
+
+	asm volatile(
+	"	bics	xzr, %1, %2\n"
+	"	csel	%0, %1, xzr, eq\n"
+	: "=&r" (safe_ptr)
+	: "r" (ptr), "r" (current_thread_info()->addr_limit)
+	: "cc");
+
+	csdb();
+	return safe_ptr;
+}
+
+/*
  * The "__xxx" versions of the user access functions do not verify the address
  * space - it must have been done previously with a separate "access_ok()"
  * call.
@@ -297,7 +317,7 @@ do {									\
 	__typeof__(*(ptr)) __user *__p = (ptr);				\
 	might_fault();							\
 	access_ok(VERIFY_READ, __p, sizeof(*__p)) ?			\
-		__get_user((x), __p) :					\
+		__p = uaccess_mask_ptr(__p), __get_user((x), __p) :	\
 		((x) = 0, -EFAULT);					\
 })
 
@@ -361,7 +381,7 @@ do {									\
 	__typeof__(*(ptr)) __user *__p = (ptr);				\
 	might_fault();							\
 	access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ?			\
-		__put_user((x), __p) :					\
+		__p = uaccess_mask_ptr(__p), __put_user((x), __p) :	\
 		-EFAULT;						\
 })
 
@@ -377,7 +397,7 @@ extern unsigned long __must_check __clear_user(void __user *addr, unsigned long
 static inline unsigned long __must_check clear_user(void __user *to, unsigned long n)
 {
 	if (access_ok(VERIFY_WRITE, to, n))
-		n = __clear_user(to, n);
+		n = __clear_user(__uaccess_mask_ptr(to), n);
 	return n;
 }
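
Note on the masking scheme above: __uaccess_mask_ptr() uses a BICS/CSEL
sequence so that an out-of-range pointer is squashed to NULL by a data
dependency rather than by a conditional branch the CPU could mispredict,
and csdb() (presumably the CSDB barrier introduced earlier in this
series) prevents the selected value from being consumed under a
mispredicted condition. A rough, userspace-only C sketch of the same
data flow follows; it is illustrative only, not the kernel code:
mask_user_ptr() and the addr_limit value are hypothetical stand-ins, it
assumes 64-bit pointers, and plain C leaves a compiler free to turn the
selection back into a branch, which is exactly what the inline asm
avoids.

	/* Illustrative sketch only -- not the kernel implementation. */
	#include <stdint.h>
	#include <stdio.h>

	/* Hypothetical stand-in for current_thread_info()->addr_limit. */
	static const uintptr_t addr_limit = ((uintptr_t)1 << 48) - 1;

	/* Return ptr if it has no bits set outside addr_limit, NULL otherwise. */
	static void *mask_user_ptr(void *ptr)
	{
		uintptr_t p = (uintptr_t)ptr;

		/*
		 * "bics xzr, %1, %2" tests ptr & ~addr_limit;
		 * "csel %0, %1, xzr, eq" then picks either ptr or zero from
		 * the resulting flags, with no branch involved.
		 */
		return (p & ~addr_limit) == 0 ? ptr : NULL;
	}

	int main(void)
	{
		void *ok  = (void *)0x1234UL;
		void *bad = (void *)0xffff000012345678UL;

		printf("ok  -> %p\n", mask_user_ptr(ok));  /* pointer unchanged */
		printf("bad -> %p\n", mask_user_ptr(bad)); /* masked to NULL */
		return 0;
	}

The point of the branch-free selection plus CSDB in the patch is that
there is nothing for the branch predictor to mispredict and the masked
pointer cannot be forwarded speculatively, which the C ternary above
does not guarantee.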