From patchwork Fri Jul  3 15:37:13 2020
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 11642141
From: Catalin Marinas
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v6 21/26] fs: Handle intra-page faults in copy_mount_options()
Date: Fri, 3 Jul 2020 16:37:13 +0100
Message-Id: <20200703153718.16973-22-catalin.marinas@arm.com>
In-Reply-To: <20200703153718.16973-1-catalin.marinas@arm.com>
References: <20200703153718.16973-1-catalin.marinas@arm.com>
Cc: linux-arch@vger.kernel.org, Szabolcs Nagy, Andrey Konovalov,
    Kevin Brodsky, Peter Collingbourne, linux-mm@kvack.org,
    Alexander Viro, Andrew Morton, Vincenzo Frascino, Will Deacon,
    Dave P Martin

The copy_mount_options() function takes a user pointer argument but no
size, and it tries to read up to PAGE_SIZE bytes. However,
copy_from_user() is not guaranteed to return all the accessible bytes
if, for example, the access crosses a page boundary and gets a fault on
the second page. To work around this, the current copy_mount_options()
implementation performs two copy_from_user() passes: the first to the
end of the current page and the second for whatever is left in the
subsequent page.

On arm64 with MTE enabled, access to a user page may trigger a fault
after part of the buffer in a page has been copied (when the user
pointer tag, bits 56-59, no longer matches the allocation tag stored in
memory). Allow copy_mount_options() to handle such intra-page faults by
falling back to a byte-at-a-time copy when copy_from_user() fails.

Note that copy_from_user() handles the zeroing of the kernel buffer in
case of error.

Signed-off-by: Catalin Marinas
Cc: Alexander Viro
---

Notes:
    v6:
    - Simplified the logic to fall back to byte-by-byte copying if
      copy_from_user() fails.

    v4:
    - Rewrite to avoid arch_has_exact_copy_from_user().

    New in v3.

 fs/namespace.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/fs/namespace.c b/fs/namespace.c
index f30ed401cc6d..163ac986dc2d 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -3074,7 +3074,7 @@ static void shrink_submounts(struct mount *mnt)
 void *copy_mount_options(const void __user * data)
 {
 	char *copy;
-	unsigned size;
+	unsigned left, offset;
 
 	if (!data)
 		return NULL;
@@ -3083,16 +3083,27 @@ void *copy_mount_options(const void __user * data)
 	if (!copy)
 		return ERR_PTR(-ENOMEM);
 
-	size = PAGE_SIZE - offset_in_page(data);
+	left = copy_from_user(copy, data, PAGE_SIZE);
 
-	if (copy_from_user(copy, data, size)) {
+	/*
+	 * Not all architectures have an exact copy_from_user(). Resort to
+	 * byte at a time.
+	 */
+	offset = PAGE_SIZE - left;
+	while (left) {
+		char c;
+		if (get_user(c, (const char __user *)data + offset))
+			break;
+		copy[offset] = c;
+		left--;
+		offset++;
+	}
+
+	if (left == PAGE_SIZE) {
 		kfree(copy);
 		return ERR_PTR(-EFAULT);
 	}
-	if (size != PAGE_SIZE) {
-		if (copy_from_user(copy + size, data + size, PAGE_SIZE - size))
-			memset(copy + size, 0, PAGE_SIZE - size);
-	}
+
 	return copy;
 }
 
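
[Editor's note: the following is a minimal, self-contained user-space
sketch of the fallback logic in the patch, added for illustration only.
It is not kernel code: fake_copy_from_user(), fake_get_user() and the
fault_at parameter are hypothetical stand-ins that simulate an inexact
copy_from_user() which gives up at a 16-byte granule boundary, while
the byte-at-a-time loop recovers the remaining accessible bytes.]

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/*
 * Stand-in for copy_from_user(): copies only up to the last 16-byte
 * boundary before the simulated fault, zero-fills the rest of the
 * destination buffer and returns the number of bytes not copied.
 */
static unsigned int fake_copy_from_user(char *dst, const char *src,
					unsigned int n, unsigned int fault_at)
{
	unsigned int stop = fault_at & ~15u;	/* inexact: granule aligned */
	unsigned int done = n < stop ? n : stop;

	memcpy(dst, src, done);
	memset(dst + done, 0, n - done);
	return n - done;
}

/* Stand-in for get_user(): reads one byte, fails at the simulated fault. */
static int fake_get_user(char *c, const char *src, unsigned int offset,
			 unsigned int fault_at)
{
	if (offset >= fault_at)
		return -1;
	*c = src[offset];
	return 0;
}

int main(void)
{
	static char user[PAGE_SIZE] = "rw,relatime,errors=remount-ro";
	static char copy[PAGE_SIZE];
	unsigned int fault_at = 26;	/* pretend byte 26 faults (e.g. tag mismatch) */
	unsigned int left, offset, bulk;

	left = fake_copy_from_user(copy, user, PAGE_SIZE, fault_at);
	bulk = PAGE_SIZE - left;	/* bytes the bulk copy managed */

	/* Byte-at-a-time fallback, mirroring the new copy_mount_options(). */
	offset = bulk;
	while (left) {
		char c;

		if (fake_get_user(&c, user, offset, fault_at))
			break;
		copy[offset] = c;
		left--;
		offset++;
	}

	if (left == PAGE_SIZE)
		return 1;	/* nothing was accessible: -EFAULT in the kernel */

	printf("bulk copy got %u bytes, byte loop extended it to %u: \"%s\"\n",
	       bulk, offset, copy);
	return 0;
}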