From patchwork Thu Oct 1 14:12:24 2020
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 11811515
From: Arnd Bergmann
To: Russell King, Christoph Hellwig
Cc: Alexander Viro, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, Arnd Bergmann
Subject: [PATCH v3 01/10] mm/maccess: fix unaligned copy_{from,to}_kernel_nofault
Date: Thu, 1 Oct 2020 16:12:24 +0200
Message-Id: <20201001141233.119343-2-arnd@arndb.de>
In-Reply-To: <20201001141233.119343-1-arnd@arndb.de>
References: <20201001141233.119343-1-arnd@arndb.de>

On machines such as ARMv5 that trap unaligned accesses, these two
functions can be slow when each access needs to be emulated, or they
might not work at all. Change them so that each loop is only used
when both the src and dst pointers are naturally aligned.
Reviewed-by: Christoph Hellwig
Signed-off-by: Arnd Bergmann
---
 mm/maccess.c | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/mm/maccess.c b/mm/maccess.c
index 3bd70405f2d8..d3f1a1f0b1c1 100644
--- a/mm/maccess.c
+++ b/mm/maccess.c
@@ -24,13 +24,21 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
 
 long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	if (!copy_from_kernel_nofault_allowed(src, size))
 		return -ERANGE;
 
 	pagefault_disable();
-	copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_from_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;
@@ -50,10 +58,18 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
 
 long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	pagefault_disable();
-	copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_to_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;
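
[Editor's note, not part of the patch] The alignment test works by OR-ing the
two addresses: any low bit set in either pointer survives in the combined
value, so (align & (sizeof(type) - 1)) == 0 exactly when both dst and src are
naturally aligned for that access width. The userspace sketch below only
illustrates that selection logic; the function name aligned_copy, the
COPY_LOOP helper and the use of memcpy as a stand-in for one type-sized
access are illustrative assumptions, not the kernel code, which uses the
__get/__put_kernel_nofault()-based loop macros shown in the diff.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch only: pick the widest copy granularity that both
 * pointers are naturally aligned for. memcpy(d, s, sizeof(type)) stands
 * in for a single type-sized access; the kernel additionally wraps each
 * access in its nofault helpers so a fault jumps to an error label.
 * Unlike the patch, this always computes align; the kernel leaves it 0
 * (i.e. "treat as aligned") on architectures with efficient unaligned
 * accesses.
 */
static void aligned_copy(void *dst, const void *src, size_t size)
{
	unsigned long align = (unsigned long)dst | (unsigned long)src;
	unsigned char *d = dst;
	const unsigned char *s = src;

#define COPY_LOOP(type)						\
	while (size >= sizeof(type)) {				\
		memcpy(d, s, sizeof(type));			\
		d += sizeof(type);				\
		s += sizeof(type);				\
		size -= sizeof(type);				\
	}

	if (!(align & 7))	/* both 8-byte aligned */
		COPY_LOOP(uint64_t);
	if (!(align & 3))	/* both 4-byte aligned */
		COPY_LOOP(uint32_t);
	if (!(align & 1))	/* both 2-byte aligned */
		COPY_LOOP(uint16_t);
	COPY_LOOP(uint8_t);	/* byte tail, or fully unaligned case */
#undef COPY_LOOP
}

int main(void)
{
	char src[32] = "0123456789abcdef0123456789abcde";
	char dst[32] = { 0 };

	/* src + 1 is odd, so align & 1 is set and only the byte loop runs. */
	aligned_copy(dst, src + 1, 16);
	printf("%.16s\n", dst);
	return 0;
}

Ordering the checks from widest to narrowest means an aligned pair of
pointers still falls through to the narrower loops for whatever tail is
left over, while a fully unaligned pair skips straight to the byte loop,
which matches the behaviour of the patched kernel functions.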