From patchwork Thu Dec 10 00:22:13 2015
X-Patchwork-Submitter: Russell King - ARM Linux
X-Patchwork-Id: 7813881
Date: Thu, 10 Dec 2015 00:22:13 +0000
From: Russell King - ARM Linux
To: Peter Rosin
Cc: Will Deacon, 'linux-kernel@vger.kernel.org',
 'linux-arm-kernel@lists.infradead.org', nico@fluxnic.net
Subject: Re: Domain faults when CONFIG_CPU_SW_DOMAIN_PAN is enabled
Message-ID: <20151210002213.GL8644@n2100.arm.linux.org.uk>
In-Reply-To: <45b615420548463ebd1d582bc5da2eff@EMAIL.axentia.se>
References: <1267b061b66c4d1fa8af1955825bc97a@EMAIL.axentia.se>
 <20151203110041.GK8644@n2100.arm.linux.org.uk>
 <20151203115143.GM8644@n2100.arm.linux.org.uk>
 <533b0f1f264e42e78b94bbff9570fb50@EMAIL.axentia.se>
 <20151203133727.GN8644@n2100.arm.linux.org.uk>
 <94580382ca344ae2b64f66fb778c6ff7@EMAIL.axentia.se>
 <20151203164118.GR8644@n2100.arm.linux.org.uk>
 <20151203172708.GT8644@n2100.arm.linux.org.uk>
 <45b615420548463ebd1d582bc5da2eff@EMAIL.axentia.se>

On Thu, Dec 03, 2015 at 09:37:51PM +0000, Peter Rosin wrote:
> I took both patches for a quick spin (a dozen boots and one hour uptime
> after that for each patch) and no incidents. I have not gathered data,
> but the crash on boot feels like it's quite a bit above 50% when there
> is a problem so this feels good (I used 5 clean reboots when I bisected
> and that worked).
>
> Reported-by: Peter Rosin
> Tested-by: Peter Rosin
>
> (and please don't forget to cc stable)

I've decided to do a more in-depth fix, so that we also solve the issue
of leaking the permissive domain register setting into the switched-to
context when we schedule inside these down_read()s.

Can you test this patch please?  Thanks.

8<====
Subject: [PATCH] ARM: fix uaccess_with_memcpy() with SW_DOMAIN_PAN

The uaccess_with_memcpy() code is currently incompatible with the SW
PAN code: it takes locks within the region where we have changed the
DACR, potentially sleeping as a result.  As we do not save and restore
the DACR across co-operative sleep events, this can lead to an
incorrect DACR value later in this code path.

Signed-off-by: Russell King
Reported-by: Peter Rosin
Tested-by: Peter Rosin
---
 arch/arm/include/asm/uaccess.h     |  4 ++++
 arch/arm/lib/uaccess_with_memcpy.c | 29 +++++++++++++++++++++++------
 2 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 8cc85a4ebec2..35c9db857ebe 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -510,10 +510,14 @@ __copy_to_user_std(void __user *to, const void *from, unsigned long n);
 static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+#ifndef CONFIG_UACCESS_WITH_MEMCPY
	unsigned int __ua_flags = uaccess_save_and_enable();
	n = arm_copy_to_user(to, from, n);
	uaccess_restore(__ua_flags);
	return n;
+#else
+	return arm_copy_to_user(to, from, n);
+#endif
 }

 extern unsigned long __must_check
diff --git a/arch/arm/lib/uaccess_with_memcpy.c b/arch/arm/lib/uaccess_with_memcpy.c
index d72b90905132..588bbc288396 100644
--- a/arch/arm/lib/uaccess_with_memcpy.c
+++ b/arch/arm/lib/uaccess_with_memcpy.c
@@ -88,6 +88,7 @@ pin_page_for_write(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
 static unsigned long noinline
 __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
 {
+	unsigned long ua_flags;
	int atomic;

	if (unlikely(segment_eq(get_fs(), KERNEL_DS))) {
@@ -118,7 +119,9 @@ __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
		if (tocopy > n)
			tocopy = n;

+		ua_flags = uaccess_save_and_enable();
		memcpy((void *)to, from, tocopy);
+		uaccess_restore(ua_flags);
		to += tocopy;
		from += tocopy;
		n -= tocopy;
@@ -145,14 +148,21 @@ arm_copy_to_user(void __user *to, const void *from, unsigned long n)
	 * With frame pointer disabled, tail call optimization kicks in
	 * as well making this test almost invisible.
	 */
-	if (n < 64)
-		return __copy_to_user_std(to, from, n);
-	return __copy_to_user_memcpy(to, from, n);
+	if (n < 64) {
+		unsigned long ua_flags = uaccess_save_and_enable();
+		n = __copy_to_user_std(to, from, n);
+		uaccess_restore(ua_flags);
+	} else {
+		n = __copy_to_user_memcpy(to, from, n);
+	}
+	return n;
 }

 static unsigned long noinline
 __clear_user_memset(void __user *addr, unsigned long n)
 {
+	unsigned long ua_flags;
+
	if (unlikely(segment_eq(get_fs(), KERNEL_DS))) {
		memset((void *)addr, 0, n);
		return 0;
@@ -175,7 +185,9 @@ __clear_user_memset(void __user *addr, unsigned long n)
		if (tocopy > n)
			tocopy = n;

+		ua_flags = uaccess_save_and_enable();
		memset((void *)addr, 0, tocopy);
+		uaccess_restore(ua_flags);
		addr += tocopy;
		n -= tocopy;
@@ -193,9 +205,14 @@ unsigned long arm_clear_user(void __user *addr, unsigned long n)
 {
	/* See rational for this in __copy_to_user() above. */
-	if (n < 64)
-		return __clear_user_std(addr, n);
-	return __clear_user_memset(addr, n);
+	if (n < 64) {
+		unsigned long ua_flags = uaccess_save_and_enable();
+		n = __clear_user_std(addr, n);
+		uaccess_restore(ua_flags);
+	} else {
+		n = __clear_user_memset(addr, n);
+	}
+	return n;
 }

 #if 0
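
For reference, here is a minimal sketch (illustrative only, not the kernel
code itself; the copy_chunk_to_user() name is made up for the example) of
the bracketing pattern the patch applies around each memcpy()/memset().
uaccess_save_and_enable() and uaccess_restore() are the existing helpers
from arch/arm/include/asm/uaccess.h:

#include <linux/string.h>
#include <linux/uaccess.h>

/*
 * The user-access window (permissive DACR under SW PAN) is only held
 * across the actual access to userspace.  Page pinning, lock taking and
 * anything else that may sleep happens outside the window, so a context
 * switch can never carry the permissive value into another task.
 */
static unsigned long copy_chunk_to_user(void __user *to, const void *from,
					unsigned long tocopy)
{
	unsigned long ua_flags;

	ua_flags = uaccess_save_and_enable();	/* open the window */
	memcpy((void *)to, from, tocopy);	/* the only user access */
	uaccess_restore(ua_flags);		/* close it again */

	return tocopy;
}

This is also why the #ifndef CONFIG_UACCESS_WITH_MEMCPY hunk moves the
save/enable out of the inline __copy_to_user() wrapper: with the memcpy
implementation selected, arm_copy_to_user() manages the window itself and
must be entered with the default DACR so that pin_page_for_write() can
still sleep safely.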