From patchwork Wed Jun 12 11:43:21 2019
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10989651
Date: Wed, 12 Jun 2019 13:43:21 +0200
Subject: [PATCH v17 04/15] mm, arm64: untag user pointers passed to memory syscalls
From: Andrey Konovalov
To: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, amd-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org,
    linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
    kvm@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: Catalin Marinas, Vincenzo Frascino, Will Deacon, Mark Rutland,
    Andrew Morton, Greg Kroah-Hartman, Kees Cook, Yishai Hadas,
    Felix Kuehling, Alexander Deucher, Christian Koenig,
    Mauro Carvalho Chehab, Jens Wiklander, Alex Williamson,
    Leon Romanovsky, Luc Van Oostenryck, Dave Martin, Khalid Aziz, enh,
    Jason Gunthorpe, Christoph Hellwig, Dmitry Vyukov, Kostya Serebryany,
    Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
    Ruben Ayrapetyan, Robin Murphy, Kevin Brodsky, Szabolcs Nagy,
    Andrey Konovalov

This patch is part of a series that extends the arm64 kernel ABI to
allow passing tagged user pointers (with the top byte set to something
other than 0x00) as syscall arguments.

This patch allows tagged pointers to be passed to the following memory
syscalls: get_mempolicy, madvise, mbind, mincore, mlock, mlock2,
mprotect, mremap, msync, munlock, move_pages.

The mmap and mremap syscalls do not currently accept tagged addresses.
Architectures may interpret the tag as a background colour for the
corresponding vma.

Reviewed-by: Catalin Marinas
Reviewed-by: Kees Cook
Signed-off-by: Andrey Konovalov
Reviewed-by: Vincenzo Frascino
Reviewed-by: Khalid Aziz
---
 mm/madvise.c   | 2 ++
 mm/mempolicy.c | 3 +++
 mm/migrate.c   | 2 +-
 mm/mincore.c   | 2 ++
 mm/mlock.c     | 4 ++++
 mm/mprotect.c  | 2 ++
 mm/mremap.c    | 7 +++++++
 mm/msync.c     | 2 ++
 8 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 628022e674a7..39b82f8a698f 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -810,6 +810,8 @@ SYSCALL_DEFINE3(madvise, unsigned long, start, size_t, len_in, int, behavior)
 	size_t len;
 	struct blk_plug plug;
 
+	start = untagged_addr(start);
+
 	if (!madvise_behavior_valid(behavior))
 		return error;
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 01600d80ae01..78e0a88b2680 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1360,6 +1360,7 @@ static long kernel_mbind(unsigned long start, unsigned long len,
 	int err;
 	unsigned short mode_flags;
 
+	start = untagged_addr(start);
 	mode_flags = mode & MPOL_MODE_FLAGS;
 	mode &= ~MPOL_MODE_FLAGS;
 	if (mode >= MPOL_MAX)
@@ -1517,6 +1518,8 @@ static int kernel_get_mempolicy(int __user *policy,
 	int uninitialized_var(pval);
 	nodemask_t nodes;
 
+	addr = untagged_addr(addr);
+
 	if (nmask != NULL && maxnode < nr_node_ids)
 		return -EINVAL;
 
diff --git a/mm/migrate.c b/mm/migrate.c
index f2ecc2855a12..d22c45cf36b2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1616,7 +1616,7 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
 			goto out_flush;
 		if (get_user(node, nodes + i))
 			goto out_flush;
-		addr = (unsigned long)p;
+		addr = (unsigned long)untagged_addr(p);
 
 		err = -ENODEV;
 		if (node < 0 || node >= MAX_NUMNODES)
diff --git a/mm/mincore.c b/mm/mincore.c
index c3f058bd0faf..64c322ed845c 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -249,6 +249,8 @@ SYSCALL_DEFINE3(mincore, unsigned long, start, size_t, len,
 	unsigned long pages;
 	unsigned char *tmp;
 
+	start = untagged_addr(start);
+
 	/* Check the start address: needs to be page-aligned.. */
 	if (start & ~PAGE_MASK)
 		return -EINVAL;
diff --git a/mm/mlock.c b/mm/mlock.c
index 080f3b36415b..e82609eaa428 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -674,6 +674,8 @@ static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t fla
 	unsigned long lock_limit;
 	int error = -ENOMEM;
 
+	start = untagged_addr(start);
+
 	if (!can_do_mlock())
 		return -EPERM;
 
@@ -735,6 +737,8 @@ SYSCALL_DEFINE2(munlock, unsigned long, start, size_t, len)
 {
 	int ret;
 
+	start = untagged_addr(start);
+
 	len = PAGE_ALIGN(len + (offset_in_page(start)));
 	start &= PAGE_MASK;
 
diff --git a/mm/mprotect.c b/mm/mprotect.c
index bf38dfbbb4b4..19f981b733bc 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -465,6 +465,8 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	const bool rier = (current->personality & READ_IMPLIES_EXEC) &&
 				(prot & PROT_READ);
 
+	start = untagged_addr(start);
+
 	prot &= ~(PROT_GROWSDOWN|PROT_GROWSUP);
 	if (grows == (PROT_GROWSDOWN|PROT_GROWSUP)) /* can't be both */
 		return -EINVAL;
diff --git a/mm/mremap.c b/mm/mremap.c
index fc241d23cd97..64c9a3b8be0a 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -606,6 +606,13 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	LIST_HEAD(uf_unmap_early);
 	LIST_HEAD(uf_unmap);
 
+	/*
+	 * Architectures may interpret the tag passed to mmap as a background
+	 * colour for the corresponding vma. For mremap we don't allow tagged
+	 * new_addr to preserve similar behaviour to mmap.
+	 */
+	addr = untagged_addr(addr);
+
 	if (flags & ~(MREMAP_FIXED | MREMAP_MAYMOVE))
 		return ret;
 
diff --git a/mm/msync.c b/mm/msync.c
index ef30a429623a..c3bd3e75f687 100644
--- a/mm/msync.c
+++ b/mm/msync.c
@@ -37,6 +37,8 @@ SYSCALL_DEFINE3(msync, unsigned long, start, size_t, len, int, flags)
 	int unmapped_error = 0;
 	int error = -EINVAL;
 
+	start = untagged_addr(start);
+
 	if (flags & ~(MS_ASYNC | MS_INVALIDATE | MS_SYNC))
 		goto out;
 	if (offset_in_page(start))
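
A note on untagged_addr(): the helper is defined per architecture and is
expected to be a no-op on architectures without tagged user pointers. The
definition is not part of this patch; as a rough sketch only, assuming the
arm64 definition introduced by the companion patches in this series (the
header and helper names below are assumptions), it amounts to a sign
extension from bit 55, which clears the top tag byte for userspace (TTBR0)
addresses while leaving kernel addresses unchanged:

	/* Illustrative sketch only, not part of this patch. */
	#include <linux/bitops.h>	/* for sign_extend64() */

	#define untagged_addr(addr) \
		((__typeof__(addr))sign_extend64((u64)(addr), 55))

With such a definition, a tagged user pointer like 0x5a00ffff12345678
untags to 0x0000ffff12345678, so the syscalls above operate on the same
mapping regardless of the tag value.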