From patchwork Sun Jan 30 21:18:26 2022

From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
    Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
    Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H . J . Lu",
    Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
    Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
    "Ravi V . Shankar", Dave Martin, Weijiang Yang, "Kirill A . Shutemov",
    joao.moreira@intel.com, John Allen, kcc@google.com, eranian@google.com
Cc: rick.p.edgecombe@intel.com
Subject: [PATCH 23/35] x86/fpu: Add helpers for modifying supervisor xstate
Date: Sun, 30 Jan 2022 13:18:26 -0800
Message-Id: <20220130211838.8382-24-rick.p.edgecombe@intel.com>
In-Reply-To: <20220130211838.8382-1-rick.p.edgecombe@intel.com>
References: <20220130211838.8382-1-rick.p.edgecombe@intel.com>

Add helpers that can be used to modify supervisor xstate safely for the
current task.

State for supervisor xstate-based features can either be live and accessed
via MSRs, or saved in memory in an xsave buffer. When the kernel needs to
modify this state, it must be sure to operate on it in the right place so
the modifications don't get clobbered.

In the past, supervisor xstate features have used get_xsave_addr() directly
and performed open-coded logic to handle operating on the saved state
correctly. This has posed two problems:
 1. The logic has been gotten wrong more than once.
 2. To reduce code, less common paths are not optimized.
    Determination of which paths are less common is based on
    assumptions about far-away code that could change.
In addition, now that get_xsave_addr() is not available outside of the core
fpu code, there isn't even a way for these supervisor features to modify the
in-memory state.

To resolve these problems, add some helpers that encapsulate the correct
logic to operate on the correct copy of the state. Map the MSRs to their
struct field locations in a case statement in __get_xsave_member().

Use the helpers like this, to write to either the MSR or the saved state:

void *xstate;

xstate = start_update_xsave_msrs(XFEATURE_FOO);
r = xsave_rdmsrl(xstate, MSR_IA32_FOO_1, &val);
if (r)
	xsave_wrmsrl(xstate, MSR_IA32_FOO_2, FOO_ENABLE);
end_update_xsave_msrs();

Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---

v1:
 - New patch.

 arch/x86/include/asm/fpu/api.h |   5 ++
 arch/x86/kernel/fpu/xstate.c   | 134 +++++++++++++++++++++++++++++++++
 2 files changed, 139 insertions(+)

diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
index c83b3020350a..6aec27984b62 100644
--- a/arch/x86/include/asm/fpu/api.h
+++ b/arch/x86/include/asm/fpu/api.h
@@ -165,4 +165,9 @@ static inline bool fpstate_is_confidential(struct fpu_guest *gfpu)
 struct task_struct;
 extern long fpu_xstate_prctl(struct task_struct *tsk, int option, unsigned long arg2);
 
+void *start_update_xsave_msrs(int xfeature_nr);
+void end_update_xsave_msrs(void);
+int xsave_rdmsrl(void *state, unsigned int msr, unsigned long long *p);
+int xsave_wrmsrl(void *state, u32 msr, u64 val);
+int xsave_set_clear_bits_msrl(void *state, u32 msr, u64 set, u64 clear);
 #endif /* _ASM_X86_FPU_API_H */
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 44397202762b..c5e20e0d0725 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1867,3 +1867,137 @@ int proc_pid_arch_status(struct seq_file *m, struct pid_namespace *ns,
 	return 0;
 }
 #endif /* CONFIG_PROC_PID_ARCH_STATUS */
+
+static u64 *__get_xsave_member(void *xstate, u32 msr)
+{
+	switch (msr) {
+	/* Currently there are no MSRs supported */
+	default:
+		WARN_ONCE(1, "x86/fpu: unsupported xstate msr (%u)\n", msr);
+		return NULL;
+	}
+}
+
+/*
+ * Return a pointer to the xstate for the feature if it should be used, or NULL
+ * if the MSRs should be written to directly. To do this safely, using the
+ * associated read/write helpers is required.
+ */
+void *start_update_xsave_msrs(int xfeature_nr)
+{
+	void *xstate;
+
+	/*
+	 * fpregs_lock() only disables preemption (mostly). So modifying state
+	 * in an interrupt could screw up some in-progress fpregs operation,
+	 * but appear to work. Warn about it.
+	 */
+	WARN_ON_ONCE(!in_task());
+	WARN_ON_ONCE(current->flags & PF_KTHREAD);
+
+	fpregs_lock();
+
+	fpregs_assert_state_consistent();
+
+	/*
+	 * If the registers don't need to be reloaded, go ahead and operate on
+	 * the registers.
+	 */
+	if (!test_thread_flag(TIF_NEED_FPU_LOAD))
+		return NULL;
+
+	xstate = get_xsave_addr(&current->thread.fpu.fpstate->regs.xsave, xfeature_nr);
+
+	/*
+	 * If regs are in the init state, they can't be retrieved from
+	 * init_fpstate due to the init optimization, but are not necessarily
+	 * zero. The only option is to restore to make everything live and
+	 * operate on registers. This will clear TIF_NEED_FPU_LOAD.
+	 *
+	 * Otherwise, if not in the init state but TIF_NEED_FPU_LOAD is set,
+	 * operate on the buffer. The registers will be restored before going
+	 * to userspace in any case, but the task might get preempted before
+	 * then, so this possibly saves an xsave.
+	 */
+	if (!xstate)
+		fpregs_restore_userregs();
+
+	return xstate;
+}
+
+void end_update_xsave_msrs(void)
+{
+	fpregs_unlock();
+}
+
+/*
+ * When TIF_NEED_FPU_LOAD is set and fpregs_state_valid() is true, the saved
+ * state and fp state match. In this case, the kernel has some good options -
+ * it can skip the restore before returning to userspace or it could skip
+ * an xsave if preempted before then.
+ *
+ * But if this correspondence is broken by either a write to the in-memory
+ * buffer or the registers, the kernel needs to be notified so it doesn't miss
+ * an xsave or restore. __xsave_msrl_prepare_write() performs this check and
+ * notifies the kernel if needed. Use before writes only, so as not to take
+ * away the kernel's options when not required.
+ *
+ * If TIF_NEED_FPU_LOAD is set, then the logic in start_update_xsave_msrs()
+ * must have resulted in targeting the in-memory state, so invalidating the
+ * registers is the right thing to do.
+ */
+static void __xsave_msrl_prepare_write(void)
+{
+	if (test_thread_flag(TIF_NEED_FPU_LOAD) &&
+	    fpregs_state_valid(&current->thread.fpu, smp_processor_id()))
+		__fpu_invalidate_fpregs_state(&current->thread.fpu);
+}
+
+int xsave_rdmsrl(void *xstate, unsigned int msr, unsigned long long *p)
+{
+	u64 *member_ptr;
+
+	if (!xstate)
+		return rdmsrl_safe(msr, p);
+
+	member_ptr = __get_xsave_member(xstate, msr);
+	if (!member_ptr)
+		return 1;
+
+	*p = *member_ptr;
+
+	return 0;
+}
+
+int xsave_wrmsrl(void *xstate, u32 msr, u64 val)
+{
+	u64 *member_ptr;
+
+	__xsave_msrl_prepare_write();
+	if (!xstate)
+		return wrmsrl_safe(msr, val);
+
+	member_ptr = __get_xsave_member(xstate, msr);
+	if (!member_ptr)
+		return 1;
+
+	*member_ptr = val;
+
+	return 0;
+}
+
+int xsave_set_clear_bits_msrl(void *xstate, u32 msr, u64 set, u64 clear)
+{
+	u64 val, new_val;
+	int ret;
+
+	ret = xsave_rdmsrl(xstate, msr, &val);
+	if (ret)
+		return ret;
+
+	new_val = (val & ~clear) | set;
+
+	if (new_val != val)
+		return xsave_wrmsrl(xstate, msr, new_val);
+
+	return 0;
+}