From patchwork Thu Oct 1 14:12:24 2020
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 11811539
From: Arnd Bergmann <arnd@arndb.de>
To: Russell King, Christoph Hellwig
Cc: linux-arch@vger.kernel.org, Arnd Bergmann, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Alexander Viro, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 01/10] mm/maccess: fix unaligned copy_{from,to}_kernel_nofault
Date: Thu, 1 Oct 2020 16:12:24 +0200
Message-Id: <20201001141233.119343-2-arnd@arndb.de>
In-Reply-To: <20201001141233.119343-1-arnd@arndb.de>
References: <20201001141233.119343-1-arnd@arndb.de>

On machines such as ARMv5 that trap unaligned accesses, these two
functions can be slow when each access needs to be emulated, or they
might not work at all. Change them so that each loop is only used
when both the src and dst pointers are naturally aligned.
Reviewed-by: Christoph Hellwig
Signed-off-by: Arnd Bergmann
---
 mm/maccess.c | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/mm/maccess.c b/mm/maccess.c
index 3bd70405f2d8..d3f1a1f0b1c1 100644
--- a/mm/maccess.c
+++ b/mm/maccess.c
@@ -24,13 +24,21 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
 
 long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	if (!copy_from_kernel_nofault_allowed(src, size))
 		return -ERANGE;
 
 	pagefault_disable();
-	copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_from_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;
@@ -50,10 +58,18 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
 
 long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	pagefault_disable();
-	copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_to_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;