From patchwork Fri Sep 18 12:46:16 2020
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 11784993
From: Arnd Bergmann
To: Christoph Hellwig, Russell King, Alexander Viro
Subject: [PATCH v2 1/9] mm/maccess: fix unaligned copy_{from,to}_kernel_nofault
Date: Fri, 18 Sep 2020 14:46:16 +0200
Message-Id: <20200918124624.1469673-2-arnd@arndb.de>
In-Reply-To: <20200918124624.1469673-1-arnd@arndb.de>
References: <20200918124624.1469673-1-arnd@arndb.de>
Cc: linux-arch@vger.kernel.org, Arnd Bergmann, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Christoph Hellwig, linux-arm-kernel@lists.infradead.org

On machines such as ARMv5 that trap unaligned accesses, copy_from_kernel_nofault() and copy_to_kernel_nofault() can be slow when every access has to be emulated by the trap handler, or they might not work at all when no emulation is available. Change them so that each fixed-size copy loop is only used when both the src and dst pointers are naturally aligned for that access size.
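As an aside for reviewers, here is a minimal userspace sketch of the check this patch adds (the helper name both_aligned() and the sample addresses are mine, not part of the patch). OR-ing dst and src merges their low-order address bits, so a single mask against access_size - 1 proves that *both* pointers are aligned for that access size:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical helper mirroring the patch's "align" test. */
static bool both_aligned(uintptr_t dst, uintptr_t src, size_t access_size)
{
	uintptr_t align = dst | src;	/* union of the low bits of both */

	return (align & (access_size - 1)) == 0;
}

int main(void)
{
	/* e.g. dst only 2-byte aligned, src 8-byte aligned: */
	uintptr_t dst = 0x1002, src = 0x2008;

	printf("u64: %d\n", both_aligned(dst, src, 8));	/* 0: dst breaks it */
	printf("u32: %d\n", both_aligned(dst, src, 4));	/* 0 */
	printf("u16: %d\n", both_aligned(dst, src, 2));	/* 1: u16 loop is safe */
	return 0;
}

For this pair only the u16 and u8 loops would run, copying two bytes at a time with a byte-sized tail.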
Reviewed-by: Christoph Hellwig
Signed-off-by: Arnd Bergmann
---
 mm/maccess.c | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/mm/maccess.c b/mm/maccess.c
index 3bd70405f2d8..d3f1a1f0b1c1 100644
--- a/mm/maccess.c
+++ b/mm/maccess.c
@@ -24,13 +24,21 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
 
 long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	if (!copy_from_kernel_nofault_allowed(src, size))
 		return -ERANGE;
 
 	pagefault_disable();
-	copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_from_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;
@@ -50,10 +58,18 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
 
 long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	pagefault_disable();
-	copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_to_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;
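For context (not part of this patch): the loop helpers being gated here are macros defined earlier in mm/maccess.c. At the time of this series they look roughly like the sketch below, quoted from memory rather than from this diff, with copy_to_kernel_nofault_loop() identical except that it uses __put_kernel_nofault():

#define copy_from_kernel_nofault_loop(dst, src, len, type, err_label)	\
	while (len >= sizeof(type)) {					\
		__get_kernel_nofault(dst, src, type, err_label);	\
		dst += sizeof(type);					\
		src += sizeof(type);					\
		len -= sizeof(type);					\
	}

Each invocation consumes the buffer in sizeof(type) chunks and falls through to the next smaller size for the remainder, so gating the u64/u32/u16 invocations on the align mask only changes which chunk sizes are attempted, not which bytes get copied; on architectures that set CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, align stays 0 and behaviour is unchanged.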