From patchwork Wed Jun 24 17:52:39 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 11623963
From: Catalin Marinas
To: linux-arm-kernel@lists.infradead.org
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, Will Deacon, Dave P
 Martin, Vincenzo Frascino, Szabolcs Nagy, Kevin Brodsky,
 Andrey Konovalov, Peter Collingbourne, Andrew Morton, Alexander Viro
Subject: [PATCH v5 20/25] fs: Handle intra-page faults in copy_mount_options()
Date: Wed, 24 Jun 2020 18:52:39 +0100
Message-Id: <20200624175244.25837-21-catalin.marinas@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200624175244.25837-1-catalin.marinas@arm.com>
References: <20200624175244.25837-1-catalin.marinas@arm.com>
MIME-Version: 1.0

The copy_mount_options() function takes a user pointer argument but no
size. It tries to read up to PAGE_SIZE bytes. However, copy_from_user()
is not guaranteed to return all the accessible bytes if, for example,
the access crosses a page boundary and gets a fault on the second page.
To work around this, the current copy_mount_options() implementation
performs two copy_from_user() passes: the first to the end of the
current page and the second for whatever is left in the subsequent
page.

On arm64 with MTE enabled, access to a user page may trigger a fault
after part of the buffer has been copied (when the user pointer tag,
bits 56-59, no longer matches the allocation tag stored in memory).
Allow copy_mount_options() to handle such intra-page faults by
returning -EFAULT only if the first copy_from_user() has not copied
any bytes.

Signed-off-by: Catalin Marinas
Cc: Alexander Viro
Reviewed-by: Kevin Brodsky
---

Notes:
    v4:
    - Rewrite to avoid arch_has_exact_copy_from_user()

    New in v3.

 fs/namespace.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/fs/namespace.c b/fs/namespace.c
index f30ed401cc6d..5b6a9c459674 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -3074,7 +3074,7 @@ static void shrink_submounts(struct mount *mnt)
 void *copy_mount_options(const void __user * data)
 {
 	char *copy;
-	unsigned size;
+	unsigned size, left;
 
 	if (!data)
 		return NULL;
@@ -3085,12 +3085,30 @@ void *copy_mount_options(const void __user * data)
 
 	size = PAGE_SIZE - offset_in_page(data);
 
-	if (copy_from_user(copy, data, size)) {
+	/*
+	 * Attempt to copy to the end of the first user page. On success,
+	 * left == 0, copy the rest from the second user page (if it is
+	 * accessible). copy_from_user() will zero the part of the kernel
+	 * buffer not copied into.
+	 *
+	 * On architectures with intra-page faults (arm64 with MTE), the read
+	 * from the first page may fail after copying part of the user data
+	 * (left > 0 && left < size). Do not attempt the second copy in this
+	 * case as the end of the valid user buffer has already been reached.
+	 * Ensure, however, that the second part of the kernel buffer is
+	 * zeroed.
+	 */
+	left = copy_from_user(copy, data, size);
+	if (left == size) {
 		kfree(copy);
 		return ERR_PTR(-EFAULT);
 	}
 	if (size != PAGE_SIZE) {
-		if (copy_from_user(copy + size, data + size, PAGE_SIZE - size))
+		if (left == 0)
+			/* return not relevant, just silence the compiler */
+			left = copy_from_user(copy + size, data + size,
+					      PAGE_SIZE - size);
+		else
 			memset(copy + size, 0, PAGE_SIZE - size);
 	}
 	return copy;
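
For anyone who wants to poke at the control flow above outside the
kernel, here is a minimal user-space sketch of the same two-pass logic.
It is only an illustration, not kernel code: mock_copy_from_user(), the
fault_at knob and the hard-coded PAGE_SIZE of 4096 are invented
stand-ins, and malloc()/NULL take the place of kmalloc()/ERR_PTR().
The mock simulates an MTE-style intra-page fault by refusing to copy
more than fault_at bytes.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/*
 * Mock of copy_from_user(): pretend the source faults after 'fault_at'
 * bytes. Like the real helper, it returns the number of bytes NOT
 * copied and zeroes the uncopied tail of the destination buffer.
 */
static unsigned long fault_at = 100;

static unsigned long mock_copy_from_user(void *to, const void *from,
					 unsigned long n)
{
	unsigned long ok = n < fault_at ? n : fault_at;

	memcpy(to, from, ok);
	memset((char *)to + ok, 0, n - ok);
	return n - ok;
}

/* The two-pass logic of copy_mount_options(), in user space. */
static void *mock_copy_mount_options(const void *data, unsigned offset)
{
	char *copy = malloc(PAGE_SIZE);
	unsigned size = PAGE_SIZE - offset;
	unsigned long left;

	left = mock_copy_from_user(copy, data, size);
	if (left == size) {		/* nothing copied at all: -EFAULT */
		free(copy);
		return NULL;
	}
	if (size != PAGE_SIZE) {
		if (left == 0)		/* first page fully read, try second */
			mock_copy_from_user(copy + size,
					    (const char *)data + size,
					    PAGE_SIZE - size);
		else			/* intra-page fault: zero the tail */
			memset(copy + size, 0, PAGE_SIZE - size);
	}
	return copy;
}

int main(void)
{
	static char src[2 * PAGE_SIZE] = "user mount options";
	/* 200 bytes on the "first page"; the mock faults after 100. */
	char *copy = mock_copy_mount_options(src, PAGE_SIZE - 200);

	printf("%s\n", copy ? copy : "(EFAULT)");
	free(copy);
	return 0;
}

Running this shows the invariant the patch relies on: the EFAULT path
is taken only when left == size, i.e. nothing at all was readable. A
partial first-page copy is accepted as a valid, shorter buffer, the
second page is not touched, and the rest of the kernel buffer stays
zeroed.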