From patchwork Mon Nov 23 20:07:51 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11926613
Date: Mon, 23 Nov 2020 21:07:51 +0100
X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog
Subject: [PATCH mm v11 27/42] arm64: mte: Add in-kernel tag fault handler
From: Andrey Konovalov
To: Andrew Morton
Cc: linux-arm-kernel@lists.infradead.org, Marco Elver, Catalin Marinas,
 Kevin Brodsky, Will Deacon, Branislav Rankov, kasan-dev@googlegroups.com,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Alexander Potapenko,
 Evgenii Stepanov, Andrey Konovalov, Andrey Ryabinin, Vincenzo Frascino,
 Dmitry Vyukov

From: Vincenzo Frascino

Add the implementation of the in-kernel fault handler.

When a tag fault happens on a kernel address:
* MTE is disabled on the current CPU,
* the execution continues.

When a tag fault happens on a user address:
* the kernel executes do_bad_area() and panics.

The tag fault handler for kernel addresses is currently empty and will be
filled in by a future commit.

Signed-off-by: Vincenzo Frascino
Co-developed-by: Andrey Konovalov
Signed-off-by: Andrey Konovalov
Reviewed-by: Catalin Marinas
Signed-off-by: Catalin Marinas
Reviewed-by: Vincenzo Frascino
---
Change-Id: I9b8aa79567f7c45f4d6a1290efcf34567e620717
---
 arch/arm64/include/asm/uaccess.h | 23 ++++++++++++++++
 arch/arm64/mm/fault.c            | 45 ++++++++++++++++++++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 385a189f7d39..d841a560fae7 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -200,13 +200,36 @@ do {									\
 			CONFIG_ARM64_PAN));				\
 } while (0)
 
+/*
+ * The Tag Check Flag (TCF) mode for MTE is per EL, hence TCF0
+ * affects EL0 and TCF affects EL1 irrespective of which TTBR is
+ * used.
+ * The kernel accesses TTBR0 usually with LDTR/STTR instructions
+ * when UAO is available, so these would act as EL0 accesses using
+ * TCF0.
+ * However futex.h code uses exclusives which would be executed as
+ * EL1, this can potentially cause a tag check fault even if the
+ * user disables TCF0.
+ *
+ * To address the problem we set the PSTATE.TCO bit in uaccess_enable()
+ * and reset it in uaccess_disable().
+ *
+ * The Tag check override (TCO) bit disables temporarily the tag checking
+ * preventing the issue.
+ */
 static inline void uaccess_disable(void)
 {
+	asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(0),
+				 ARM64_MTE, CONFIG_KASAN_HW_TAGS));
+
 	__uaccess_disable(ARM64_HAS_PAN);
 }
 
 static inline void uaccess_enable(void)
 {
+	asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(1),
+				 ARM64_MTE, CONFIG_KASAN_HW_TAGS));
+
 	__uaccess_enable(ARM64_HAS_PAN);
 }
 
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 183d1e6dd9e0..1e4b9353c68a 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -297,6 +298,44 @@ static void die_kernel_fault(const char *msg, unsigned long addr,
 	do_exit(SIGKILL);
 }
 
+static void report_tag_fault(unsigned long addr, unsigned int esr,
+			     struct pt_regs *regs)
+{
+}
+
+static void do_tag_recovery(unsigned long addr, unsigned int esr,
+			    struct pt_regs *regs)
+{
+	static bool reported;
+
+	if (!READ_ONCE(reported)) {
+		report_tag_fault(addr, esr, regs);
+		WRITE_ONCE(reported, true);
+	}
+
+	/*
+	 * Disable MTE Tag Checking on the local CPU for the current EL.
+	 * It will be done lazily on the other CPUs when they will hit a
+	 * tag fault.
+	 */
+	sysreg_clear_set(sctlr_el1, SCTLR_ELx_TCF_MASK, SCTLR_ELx_TCF_NONE);
+	isb();
+}
+
+static bool is_el1_mte_sync_tag_check_fault(unsigned int esr)
+{
+	unsigned int ec = ESR_ELx_EC(esr);
+	unsigned int fsc = esr & ESR_ELx_FSC;
+
+	if (ec != ESR_ELx_EC_DABT_CUR)
+		return false;
+
+	if (fsc == ESR_ELx_FSC_MTE)
+		return true;
+
+	return false;
+}
+
 static void __do_kernel_fault(unsigned long addr, unsigned int esr,
 			      struct pt_regs *regs)
 {
@@ -313,6 +352,12 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
 	    "Ignoring spurious kernel translation fault at virtual address %016lx\n", addr))
 		return;
 
+	if (is_el1_mte_sync_tag_check_fault(esr)) {
+		do_tag_recovery(addr, esr, regs);
+
+		return;
+	}
+
 	if (is_el1_permission_fault(addr, esr, regs)) {
 		if (esr & ESR_ELx_WNR)
 			msg = "write to read-only memory";
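
[Illustrative note, not part of the patch: below is a minimal user-space
sketch of the ESR decoding that is_el1_mte_sync_tag_check_fault() performs.
The bit layout (EC in ESR[31:26], DFSC in ESR[5:0]) and the values 0x25
(data abort without a change in exception level) and 0x11 (synchronous tag
check fault) follow the Arm ESR_EL1 encoding that the kernel takes from
<asm/esr.h>; the macro names here are local to the example.]

/*
 * Sketch of the EL1 synchronous MTE tag check fault test, re-implemented
 * outside the kernel for illustration only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EX_ESR_EC_SHIFT		26
#define EX_ESR_EC_MASK		(0x3Fu << EX_ESR_EC_SHIFT)
#define EX_ESR_EC_DABT_CUR	0x25u	/* data abort, no EL change (EL1) */
#define EX_ESR_FSC_MASK		0x3Fu
#define EX_ESR_FSC_MTE		0x11u	/* synchronous tag check fault */

static bool example_is_el1_mte_sync_tag_check_fault(uint32_t esr)
{
	uint32_t ec  = (esr & EX_ESR_EC_MASK) >> EX_ESR_EC_SHIFT;
	uint32_t fsc = esr & EX_ESR_FSC_MASK;

	return ec == EX_ESR_EC_DABT_CUR && fsc == EX_ESR_FSC_MTE;
}

int main(void)
{
	/* Compose an ESR with EC = 0x25 and DFSC = 0x11: a tag check fault. */
	uint32_t esr = (EX_ESR_EC_DABT_CUR << EX_ESR_EC_SHIFT) | EX_ESR_FSC_MTE;

	printf("tag check fault: %s\n",
	       example_is_el1_mte_sync_tag_check_fault(esr) ? "yes" : "no");
	return 0;
}

Running the sketch prints "tag check fault: yes" for the composed value and
"no" for any other DFSC, mirroring the early-return structure of the kernel
helper above.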