From patchwork Fri Jun 8 23:51:03 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dan Williams <dan.j.williams@intel.com>
X-Patchwork-Id: 10455371
Subject: [PATCH v4 08/12] x86/memory_failure: Introduce {set, clear}_mce_nospec()
From: Dan Williams <dan.j.williams@intel.com>
To: linux-nvdimm@lists.01.org
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Borislav Petkov,
    linux-edac@vger.kernel.org, x86@kernel.org, Tony Luck, hch@lst.de,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, jack@suse.cz
Date: Fri, 08 Jun 2018 16:51:03 -0700
Message-ID: <152850186345.38390.5993995809392775033.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <152850182079.38390.8280340535691965744.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <152850182079.38390.8280340535691965744.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-2-gc94f

Currently memory_failure() returns zero if the error was handled. On
that result mce_unmap_kpfn() is called to zap the page out of the
kernel linear mapping to prevent speculative fetches of potentially
poisoned memory.
However, in the case of dax mapped devmap pages the page may be in
active permanent use by the device driver, so it cannot be unmapped
from the kernel. Instead of marking the page not present, marking the
page UC should be sufficient for preventing poison from being
pre-fetched into the cache. Convert mce_unmap_kpfn() to
set_mce_nospec(), remapping the page as UC to hide it from speculative
accesses. Given that persistent memory errors can be cleared by the
driver, include a facility to restore the page to cacheable operation,
clear_mce_nospec().

Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Borislav Petkov
Cc: <linux-edac@vger.kernel.org>
Cc: <x86@kernel.org>
Acked-by: Tony Luck
Signed-off-by: Dan Williams
---
 arch/x86/include/asm/set_memory.h         |   42 +++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/mcheck/mce-internal.h |   15 ----------
 arch/x86/kernel/cpu/mcheck/mce.c          |   38 ++------------------------
 include/linux/set_memory.h                |   14 ++++++++++
 4 files changed, 59 insertions(+), 50 deletions(-)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index bd090367236c..cf5e9124b45e 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -88,4 +88,46 @@ extern int kernel_set_to_readonly;
 void set_kernel_text_rw(void);
 void set_kernel_text_ro(void);
 
+#ifdef CONFIG_X86_64
+static inline int set_mce_nospec(unsigned long pfn)
+{
+	unsigned long decoy_addr;
+	int rc;
+
+	/*
+	 * Mark the linear address as UC to make sure we don't log more
+	 * errors because of speculative access to the page.
+	 * We would like to just call:
+	 *	set_memory_uc((unsigned long)pfn_to_kaddr(pfn), 1);
+	 * but doing that would radically increase the odds of a
+	 * speculative access to the poison page because we'd have
+	 * the virtual address of the kernel 1:1 mapping sitting
+	 * around in registers.
+	 * Instead we get tricky. We create a non-canonical address
+	 * that looks just like the one we want, but has bit 63 flipped.
+	 * This relies on set_memory_uc() properly sanitizing any __pa()
+	 * results with __PHYSICAL_MASK or PTE_PFN_MASK.
+	 */
+	decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
+
+	rc = set_memory_uc(decoy_addr, 1);
+	if (rc)
+		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
+	return rc;
+}
+#define set_mce_nospec set_mce_nospec
+
+/* Restore full speculative operation to the pfn. */
+static inline int clear_mce_nospec(unsigned long pfn)
+{
+	return set_memory_wb((unsigned long) pfn_to_kaddr(pfn), 1);
+}
+#define clear_mce_nospec clear_mce_nospec
+#else
+/*
+ * Few people would run a 32-bit kernel on a machine that supports
+ * recoverable errors because they have too much memory to boot 32-bit.
+ */
+#endif
+
 #endif /* _ASM_X86_SET_MEMORY_H */

diff --git a/arch/x86/kernel/cpu/mcheck/mce-internal.h b/arch/x86/kernel/cpu/mcheck/mce-internal.h
index 374d1aa66952..ceb67cd5918f 100644
--- a/arch/x86/kernel/cpu/mcheck/mce-internal.h
+++ b/arch/x86/kernel/cpu/mcheck/mce-internal.h
@@ -113,21 +113,6 @@ static inline void mce_register_injector_chain(struct notifier_block *nb) { }
 static inline void mce_unregister_injector_chain(struct notifier_block *nb) { }
 #endif
 
-#ifndef CONFIG_X86_64
-/*
- * On 32-bit systems it would be difficult to safely unmap a poison page
- * from the kernel 1:1 map because there are no non-canonical addresses that
- * we can use to refer to the address without risking a speculative access.
- * However, this isn't much of an issue because:
- * 1) Few unmappable pages are in the 1:1 map. Most are in HIGHMEM which
- *    are only mapped into the kernel as needed
- * 2) Few people would run a 32-bit kernel on a machine that supports
- *    recoverable errors because they have too much memory to boot 32-bit.
- */
-static inline void mce_unmap_kpfn(unsigned long pfn) {}
-#define mce_unmap_kpfn mce_unmap_kpfn
-#endif
-
 struct mca_config {
 	bool dont_log_ce;
 	bool cmci_disabled;

diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index 42cf2880d0ed..a0fbf0a8b7e6 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -42,6 +42,7 @@
 #include <linux/irq_work.h>
 #include <linux/export.h>
 #include <linux/jump_label.h>
+#include <linux/set_memory.h>
 
 #include <asm/intel-family.h>
 #include <asm/processor.h>
@@ -50,7 +51,6 @@
 #include <asm/mce.h>
 #include <asm/msr.h>
 #include <asm/reboot.h>
-#include <asm/set_memory.h>
 
 #include "mce-internal.h"
 
@@ -108,10 +108,6 @@ static struct irq_work mce_irq_work;
 
 static void (*quirk_no_way_out)(int bank, struct mce *m, struct pt_regs *regs);
 
-#ifndef mce_unmap_kpfn
-static void mce_unmap_kpfn(unsigned long pfn);
-#endif
-
 /*
  * CPU/chipset specific EDAC code can register a notifier call here to print
  * MCE errors in a human-readable form.
@@ -602,7 +598,7 @@ static int srao_decode_notifier(struct notifier_block *nb, unsigned long val,
 	if (mce_usable_address(mce) && (mce->severity == MCE_AO_SEVERITY)) {
 		pfn = mce->addr >> PAGE_SHIFT;
 		if (!memory_failure(pfn, 0))
-			mce_unmap_kpfn(pfn);
+			set_mce_nospec(pfn);
 	}
 
 	return NOTIFY_OK;
@@ -1070,38 +1066,10 @@ static int do_memory_failure(struct mce *m)
 	if (ret)
 		pr_err("Memory error not recovered");
 	else
-		mce_unmap_kpfn(m->addr >> PAGE_SHIFT);
+		set_mce_nospec(m->addr >> PAGE_SHIFT);
 	return ret;
 }
 
-#ifndef mce_unmap_kpfn
-static void mce_unmap_kpfn(unsigned long pfn)
-{
-	unsigned long decoy_addr;
-
-	/*
-	 * Unmap this page from the kernel 1:1 mappings to make sure
-	 * we don't log more errors because of speculative access to
-	 * the page.
-	 * We would like to just call:
-	 *	set_memory_np((unsigned long)pfn_to_kaddr(pfn), 1);
-	 * but doing that would radically increase the odds of a
-	 * speculative access to the poison page because we'd have
-	 * the virtual address of the kernel 1:1 mapping sitting
-	 * around in registers.
-	 * Instead we get tricky. We create a non-canonical address
-	 * that looks just like the one we want, but has bit 63 flipped.
-	 * This relies on set_memory_np() not checking whether we passed
-	 * a legal address.
-	 */
-
-	decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
-
-	if (set_memory_np(decoy_addr, 1))
-		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
-}
-#endif
-
 /*
  * The actual machine check handler. This only handles real
  * exceptions when something got corrupted coming in through int 18.

diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index da5178216da5..2a986d282a97 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -17,6 +17,20 @@ static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
+#ifndef set_mce_nospec
+static inline int set_mce_nospec(unsigned long pfn)
+{
+	return 0;
+}
+#endif
+
+#ifndef clear_mce_nospec
+static inline int clear_mce_nospec(unsigned long pfn)
+{
+	return 0;
+}
+#endif
+
 #ifndef CONFIG_ARCH_HAS_MEM_ENCRYPT
 static inline int set_memory_encrypted(unsigned long addr, int numpages)
 {
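
For illustration only, and not part of the patch: a minimal user-space
sketch of the decoy-address arithmetic that set_mce_nospec() performs
above. The PAGE_SHIFT, PAGE_OFFSET, and pfn values here are assumed
stand-ins for a typical x86_64 4-level paging layout; in the kernel
they come from the architecture headers, and KASLR can shift
PAGE_OFFSET at boot.

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Assumed constants, illustrative only. */
	#define PAGE_SHIFT  12
	#define PAGE_OFFSET 0xffff888000000000ULL /* base of the 1:1 map */
	#define BIT(nr)     (1ULL << (nr))

	int main(void)
	{
		uint64_t pfn = 0x12345; /* hypothetical poisoned page frame */

		/* The obvious 1:1-map virtual address of the page ... */
		uint64_t kaddr = (pfn << PAGE_SHIFT) + PAGE_OFFSET;

		/*
		 * ... and the decoy: identical except for bit 63, which
		 * puts it in the non-canonical hole, so the CPU cannot
		 * speculatively dereference it while it sits in a
		 * register. set_memory_uc() still reaches the intended
		 * pfn because its __pa() result is sanitized with
		 * __PHYSICAL_MASK / PTE_PFN_MASK.
		 */
		uint64_t decoy = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));

		printf("kaddr = 0x%016" PRIx64 "\n", kaddr);
		printf("decoy = 0x%016" PRIx64 "\n", decoy);
		return 0;
	}

With these assumed constants the sketch prints kaddr =
0xffff888012345000 and decoy = 0x7fff888012345000: the same low bits,
but the decoy falls outside the canonical address range.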