From patchwork Tue Nov 10 22:10:25 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11895689
Date: Tue, 10 Nov 2020 23:10:25 +0100
Message-Id: <4a7819f8942922451e8075d7003f7df357919dfc.1605046192.git.andreyknvl@google.com>
Subject: [PATCH v9 28/44] arm64: mte: Reset the page tag in page->flags
From: Andrey Konovalov
To: Catalin Marinas
Cc: linux-arm-kernel@lists.infradead.org, Marco Elver, Andrey Konovalov,
 Kevin Brodsky, Will Deacon, Branislav Rankov, kasan-dev@googlegroups.com,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Alexander Potapenko,
 Evgenii Stepanov, Andrey Ryabinin, Andrew Morton, Vincenzo Frascino,
 Dmitry Vyukov

From: Vincenzo Frascino

The hardware tag-based KASAN mode, for compatibility with the other
modes, stores the tag associated with a page in page->flags.
Because of this, the kernel can fault on access when it allocates a page
with an initial tag and the user then changes the tags. Reset the tag
associated by the kernel with a page in all the meaningful places to
prevent kernel faults on access.

Note: an alternative to this approach could be to modify page_to_virt().
That, however, could end up being racy: if a CPU checks the PG_mte_tagged
bit and decides that the page is not tagged, but another CPU maps the same
page with PROT_MTE and it becomes tagged, the subsequent kernel access
would fail.

Signed-off-by: Vincenzo Frascino
Signed-off-by: Andrey Konovalov
---
Change-Id: I8451d438bb63364de2a3e68041e3a27866921d4e
---
 arch/arm64/kernel/hibernate.c | 5 +++++
 arch/arm64/kernel/mte.c       | 9 +++++++++
 arch/arm64/mm/copypage.c      | 1 +
 arch/arm64/mm/mteswap.c       | 9 +++++++++
 4 files changed, 24 insertions(+)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 42003774d261..9c9f47e9f7f4 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -371,6 +371,11 @@ static void swsusp_mte_restore_tags(void)
 		unsigned long pfn = xa_state.xa_index;
 		struct page *page = pfn_to_online_page(pfn);
 
+		/*
+		 * It is not required to invoke page_kasan_tag_reset(page)
+		 * at this point since the tags stored in page->flags are
+		 * already restored.
+		 */
 		mte_restore_page_tags(page_address(page), tags);
 
 		mte_free_tag_storage(tags);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 8f99c65837fd..600b26d65b41 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -34,6 +34,15 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 			return;
 	}
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_clear_page_tags(page_address(page));
 }
 
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 70a71f38b6a9..f0efa4847e2f 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -23,6 +23,7 @@ void copy_highpage(struct page *to, struct page *from)
 
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
+		page_kasan_tag_reset(to);
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index c52c1847079c..9cc59696489c 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,6 +53,15 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_restore_page_tags(page_address(page), tags);
 
 	return true;