From patchwork Mon Nov 23 20:07:50 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11926611
Date: Mon, 23 Nov 2020 21:07:50 +0100
Message-Id: <9073d4e973747a6f78d5bdd7ebe17f290d087096.1606161801.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog
Subject: [PATCH mm v11 26/42] arm64: mte: Reset the page tag in page->flags
From: Andrey Konovalov
To: Andrew Morton
Cc: linux-arm-kernel@lists.infradead.org, Marco Elver, Catalin Marinas,
 Kevin Brodsky, Will Deacon, Branislav Rankov, kasan-dev@googlegroups.com,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Alexander Potapenko,
 Evgenii Stepanov, Andrey Konovalov, Andrey Ryabinin, Vincenzo Frascino,
 Dmitry Vyukov

From: Vincenzo Frascino

The hardware tag-based KASAN mode, for compatibility with the other
modes, stores the tag associated with a page in page->flags.
Due to this, the kernel faults on access when it allocates a page with
an initial tag and the user changes the tags. Reset the tag associated
by the kernel with a page in all the meaningful places to prevent
kernel faults on access.

Note: an alternative to this approach could be to modify
page_to_virt(). That, however, could end up being racy: if a CPU checks
the PG_mte_tagged bit and decides that the page is not tagged, but
another CPU maps the same page with PROT_MTE and it becomes tagged, the
subsequent kernel access would fault.

Signed-off-by: Vincenzo Frascino
Signed-off-by: Andrey Konovalov
Reviewed-by: Catalin Marinas
---
Change-Id: I8451d438bb63364de2a3e68041e3a27866921d4e
---
 arch/arm64/kernel/hibernate.c | 5 +++++
 arch/arm64/kernel/mte.c       | 9 +++++++++
 arch/arm64/mm/copypage.c      | 9 +++++++++
 arch/arm64/mm/mteswap.c       | 9 +++++++++
 4 files changed, 32 insertions(+)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 42003774d261..9c9f47e9f7f4 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -371,6 +371,11 @@ static void swsusp_mte_restore_tags(void)
 		unsigned long pfn = xa_state.xa_index;
 		struct page *page = pfn_to_online_page(pfn);
 
+		/*
+		 * It is not required to invoke page_kasan_tag_reset(page)
+		 * at this point since the tags stored in page->flags are
+		 * already restored.
+		 */
 		mte_restore_page_tags(page_address(page), tags);
 
 		mte_free_tag_storage(tags);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 8f99c65837fd..86d554ce98b6 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -34,6 +34,15 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 		return;
 	}
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_clear_page_tags(page_address(page));
 }
 
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 70a71f38b6a9..b5447e53cd73 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -23,6 +23,15 @@ void copy_highpage(struct page *to, struct page *from)
 
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
+		page_kasan_tag_reset(to);
+		/*
+		 * We need smp_wmb() in between setting the flags and clearing the
+		 * tags because if another thread reads page->flags and builds a
+		 * tagged address out of it, there is an actual dependency to the
+		 * memory access, but on the current thread we do not guarantee that
+		 * the new page->flags are visible before the tags were updated.
+		 */
+		smp_wmb();
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index c52c1847079c..7c4ef56265ee 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,6 +53,15 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_restore_page_tags(page_address(page), tags);
 
 	return true;