From patchwork Fri Feb 25 20:30:58 2022
X-Patchwork-Submitter: Alison Schofield
X-Patchwork-Id: 12760890
From: alison.schofield@intel.com
To: Ben Widawsky, Dan Williams, Ira Weiny, Vishal Verma
Cc: Alison Schofield, linux-cxl@vger.kernel.org
Subject: [PATCH v2 1/4] cxl/mbox: Move cxl_mem_command construction to helper funcs
Date: Fri, 25 Feb 2022 12:30:58 -0800
Message-Id: <84c0ddda79331bbbc37a6237e980b53735c29c77.1645817416.git.alison.schofield@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

From: Alison Schofield

Sanitizing and constructing a cxl_mem_command from a userspace command
is part of the validation process prior to submitting the command to a
CXL device. Move this work to helper functions: cxl_to_mem_cmd(),
cxl_to_mem_cmd_raw().

This declutters cxl_validate_cmd_from_user() in preparation for adding
new validation steps.
Signed-off-by: Alison Schofield
---
 drivers/cxl/core/mbox.c | 143 ++++++++++++++++++++++------------------
 1 file changed, 79 insertions(+), 64 deletions(-)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index be61a0d8016b..06fbe6d079ba 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -207,6 +207,75 @@ static bool cxl_mem_raw_command_allowed(u16 opcode)
 	return true;
 }
 
+static int cxl_to_mem_cmd_raw(struct cxl_dev_state *cxlds,
+			      const struct cxl_send_command *send_cmd,
+			      struct cxl_mem_command *mem_cmd)
+{
+	if (send_cmd->raw.rsvd)
+		return -EINVAL;
+	/*
+	 * Unlike supported commands, the output size of RAW commands
+	 * gets passed along without further checking, so it must be
+	 * validated here.
+	 */
+	if (send_cmd->out.size > cxlds->payload_size)
+		return -EINVAL;
+
+	if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
+		return -EPERM;
+
+	dev_WARN_ONCE(cxlds->dev, true, "raw command path used\n");
+
+	mem_cmd->info.id = CXL_MEM_COMMAND_ID_RAW;
+	mem_cmd->info.flags = 0;
+	mem_cmd->info.size_in = send_cmd->in.size;
+	mem_cmd->info.size_out = send_cmd->out.size;
+	mem_cmd->opcode = send_cmd->raw.opcode;
+
+	return 0;
+}
+
+static int cxl_to_mem_cmd(struct cxl_dev_state *cxlds,
+			  const struct cxl_send_command *send_cmd,
+			  struct cxl_mem_command *mem_cmd)
+{
+	const struct cxl_command_info *info;
+	struct cxl_mem_command *c;
+
+	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
+		return -EINVAL;
+
+	if (send_cmd->rsvd)
+		return -EINVAL;
+
+	if (send_cmd->in.rsvd || send_cmd->out.rsvd)
+		return -EINVAL;
+
+	/* Convert user's command into the internal representation */
+	c = &cxl_mem_commands[send_cmd->id];
+	info = &c->info;
+
+	/* Check that the command is enabled for hardware */
+	if (!test_bit(info->id, cxlds->enabled_cmds))
+		return -ENOTTY;
+
+	/* Check that the command is not claimed for exclusive kernel use */
+	if (test_bit(info->id, cxlds->exclusive_cmds))
+		return -EBUSY;
+
+	/* Check the input buffer is the expected size */
+	if (info->size_in >= 0 && info->size_in != send_cmd->in.size)
+		return -ENOMEM;
+
+	/* Check the output buffer is at least large enough */
+	if (info->size_out >= 0 && send_cmd->out.size < info->size_out)
+		return -ENOMEM;
+
+	memcpy(mem_cmd, c, sizeof(*c));
+	mem_cmd->info.size_in = send_cmd->in.size;
+	return 0;
+}
+
 /**
  * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
  * @cxlds: The device data for the operation
@@ -230,8 +299,8 @@ static int cxl_validate_cmd_from_user(struct cxl_dev_state *cxlds,
 				      const struct cxl_send_command *send_cmd,
 				      struct cxl_mem_command *out_cmd)
 {
-	const struct cxl_command_info *info;
-	struct cxl_mem_command *c;
+	struct cxl_mem_command mem_cmd;
+	int rc;
 
 	if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX)
 		return -ENOTTY;
@@ -244,70 +313,16 @@ static int cxl_validate_cmd_from_user(struct cxl_dev_state *cxlds,
 	if (send_cmd->in.size > cxlds->payload_size)
 		return -EINVAL;
 
-	/*
-	 * Checks are bypassed for raw commands but a WARN/taint will occur
-	 * later in the callchain
-	 */
-	if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) {
-		const struct cxl_mem_command temp = {
-			.info = {
-				.id = CXL_MEM_COMMAND_ID_RAW,
-				.flags = 0,
-				.size_in = send_cmd->in.size,
-				.size_out = send_cmd->out.size,
-			},
-			.opcode = send_cmd->raw.opcode
-		};
+	/* Sanitize and construct a cxl_mem_command */
+	if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW)
+		rc = cxl_to_mem_cmd_raw(cxlds, send_cmd, &mem_cmd);
+	else
+		rc = cxl_to_mem_cmd(cxlds, send_cmd, &mem_cmd);
 
-		if (send_cmd->raw.rsvd)
-			return -EINVAL;
+	if (rc)
+		return rc;
 
-		/*
-		 * Unlike supported commands, the output size of RAW commands
-		 * gets passed along without further checking, so it must be
-		 * validated here.
-		 */
-		if (send_cmd->out.size > cxlds->payload_size)
-			return -EINVAL;
-
-		if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
-			return -EPERM;
-
-		memcpy(out_cmd, &temp, sizeof(temp));
-
-		return 0;
-	}
-
-	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
-		return -EINVAL;
-
-	if (send_cmd->rsvd)
-		return -EINVAL;
-
-	if (send_cmd->in.rsvd || send_cmd->out.rsvd)
-		return -EINVAL;
-
-	/* Convert user's command into the internal representation */
-	c = &cxl_mem_commands[send_cmd->id];
-	info = &c->info;
-
-	/* Check that the command is enabled for hardware */
-	if (!test_bit(info->id, cxlds->enabled_cmds))
-		return -ENOTTY;
-
-	/* Check that the command is not claimed for exclusive kernel use */
-	if (test_bit(info->id, cxlds->exclusive_cmds))
-		return -EBUSY;
-
-	/* Check the input buffer is the expected size */
-	if (info->size_in >= 0 && info->size_in != send_cmd->in.size)
-		return -ENOMEM;
-
-	/* Check the output buffer is at least large enough */
-	if (info->size_out >= 0 && send_cmd->out.size < info->size_out)
-		return -ENOMEM;
-
-	memcpy(out_cmd, c, sizeof(*c));
+	memcpy(out_cmd, &mem_cmd, sizeof(mem_cmd));
 	out_cmd->info.size_in = send_cmd->in.size;
 	/*
 	 * XXX: out_cmd->info.size_out will be controlled by the driver, and the
 	 * specified number of bytes @send_cmd->out.size will be copied back out
 	 * to userspace.
 	 */
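For reference, the path being reorganized here is driven from user space
through the CXL_MEM_SEND_COMMAND ioctl that cxl_send_cmd() services. The
sketch below is illustrative only and is not part of the patch; the
/dev/cxl/mem0 node and the <linux/cxl_mem.h> header name are assumptions,
and the command id is left to the caller:

/*
 * Hypothetical user-space caller of the path refactored above.
 * Assumptions (not taken from this patch): the memdev node is
 * /dev/cxl/mem0 and the installed UAPI header is <linux/cxl_mem.h>.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/cxl_mem.h>

int send_no_payload_cmd(__u32 id)
{
	struct cxl_send_command cmd;
	int fd, rc;

	fd = open("/dev/cxl/mem0", O_RDWR);
	if (fd < 0)
		return -1;

	memset(&cmd, 0, sizeof(cmd));	/* reserved fields must stay zero */
	cmd.id = id;	/* e.g. an id reported by CXL_MEM_QUERY_COMMANDS */

	/*
	 * cxl_validate_cmd_from_user() -> cxl_to_mem_cmd() sanitizes this
	 * before anything reaches the device: the ioctl fails with errno
	 * ENOTTY for an unknown/disabled command, EBUSY for a kernel-
	 * exclusive command, EINVAL for reserved bits, ENOMEM for a bad
	 * payload size.
	 */
	rc = ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd);
	if (rc == 0)
		printf("device retval: %u\n", cmd.retval);

	close(fd);
	return rc;
}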
From patchwork Fri Feb 25 20:30:59 2022
X-Patchwork-Submitter: Alison Schofield
X-Patchwork-Id: 12760889
From: alison.schofield@intel.com
To: Ben Widawsky, Dan Williams, Ira Weiny, Vishal Verma
Cc: Alison Schofield, linux-cxl@vger.kernel.org
Subject: [PATCH v2 2/4] cxl/mbox: Centralize the validation of user commands
Date: Fri, 25 Feb 2022 12:30:59 -0800
X-Mailing-List: linux-cxl@vger.kernel.org

From: Alison Schofield

The validation of a user command is primarily, but not exclusively,
performed in cxl_validate_cmd_from_user(). Other functions in the send
path perform checks as the command is prepared for submission to the
device.

Centralize the command validation work in cxl_validate_cmd_from_user().
Make it return a valid cxl_mbox_cmd that is subsequently consumed by
handle_mailbox_cmd_from_user().

This reorganization is in preparation for performing additional
validation on user commands.

Signed-off-by: Alison Schofield
---
 drivers/cxl/core/mbox.c | 127 ++++++++++++++++++----------------------
 1 file changed, 58 insertions(+), 69 deletions(-)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 06fbe6d079ba..e0140864a9fd 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -277,10 +277,11 @@ static int cxl_to_mem_cmd(struct cxl_dev_state *cxlds,
 }
 
 /**
- * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
+ * cxl_validate_cmd_from_user() - Construct a valid cxl_mbox_cmd from
+ * the user's cxl_send_command.
  * @cxlds: The device data for the operation
  * @send_cmd: &struct cxl_send_command copied in from userspace.
- * @out_cmd: Sanitized and populated &struct cxl_mem_command.
+ * @mbox_cmd: Sanitized and populated &struct cxl_mbox_cmd.
  *
  * Return:
  * * %0	- @out_cmd is ready to send.
@@ -290,14 +291,14 @@ static int cxl_to_mem_cmd(struct cxl_dev_state *cxlds,
 * * %-EPERM	- Attempted to use a protected command.
 * * %-EBUSY	- Kernel has claimed exclusive access to this opcode
 *
- * The result of this command is a fully validated command in @out_cmd that is
+ * The result of this command is a fully validated mailbox command that is
 * safe to send to the hardware.
 *
 * See handle_mailbox_cmd_from_user()
 */
 static int cxl_validate_cmd_from_user(struct cxl_dev_state *cxlds,
 				      const struct cxl_send_command *send_cmd,
-				      struct cxl_mem_command *out_cmd)
+				      struct cxl_mbox_cmd *mbox_cmd)
 {
 	struct cxl_mem_command mem_cmd;
 	int rc;
@@ -322,13 +323,34 @@ static int cxl_validate_cmd_from_user(struct cxl_dev_state *cxlds,
 	if (rc)
 		return rc;
 
-	memcpy(out_cmd, &mem_cmd, sizeof(mem_cmd));
-	out_cmd->info.size_in = send_cmd->in.size;
-	/*
-	 * XXX: out_cmd->info.size_out will be controlled by the driver, and the
-	 * specified number of bytes @send_cmd->out.size will be copied back out
-	 * to userspace.
-	 */
+	/* Construct the cxl_mbox_cmd */
+	memset(mbox_cmd, 0, sizeof(*mbox_cmd));
+	mbox_cmd->opcode = mem_cmd.opcode;
+	mbox_cmd->size_in = mem_cmd.info.size_in;
+
+	if (!mbox_cmd->size_in)
+		goto size_out;
+
+	mbox_cmd->payload_in = vmemdup_user(u64_to_user_ptr(send_cmd->in.payload),
+					    mbox_cmd->size_in);
+	if (IS_ERR(mbox_cmd->payload_in))
+		return PTR_ERR(mbox_cmd->payload_in);
+
+size_out:
+	/* Prepare to handle a full payload for variable sized output */
+	if (mem_cmd.info.size_out < 0)
+		mbox_cmd->size_out = cxlds->payload_size;
+	else
+		mbox_cmd->size_out = mem_cmd.info.size_out;
+
+	if (mbox_cmd->size_out) {
+		mbox_cmd->payload_out = kvzalloc(mbox_cmd->size_out,
+						 GFP_KERNEL);
+		if (!mbox_cmd->payload_out) {
+			kvfree(mbox_cmd->payload_in);
+			return -ENOMEM;
+		}
+	}
 
 	return 0;
 }
@@ -370,67 +392,33 @@ int cxl_query_cmd(struct cxl_memdev *cxlmd,
 /**
  * handle_mailbox_cmd_from_user() - Dispatch a mailbox command for userspace.
  * @cxlds: The device data for the operation
- * @cmd: The validated command.
- * @in_payload: Pointer to userspace's input payload.
+ * @mbox_cmd: The validated mailbox command ready to send.
  * @out_payload: Pointer to userspace's output payload.
- * @size_out: (Input) Max payload size to copy out.
- *            (Output) Payload size hardware generated.
+ * @size_out: (Output) Payload size hardware generated.
 * @retval: Hardware generated return code from the operation.
 *
 * Return:
 * * %0	- Mailbox transaction succeeded. This implies the mailbox
 *	  protocol completed successfully not that the operation itself
 *	  was successful.
- * * %-ENOMEM  - Couldn't allocate a bounce buffer.
 * * %-EFAULT	- Something happened with copy_to/from_user.
 * * %-EINTR	- Mailbox acquisition interrupted.
 * * %-EXXX	- Transaction level failures.
 *
- * Creates the appropriate mailbox command and dispatches it on behalf of a
- * userspace request. The input and output payloads are copied between
- * userspace.
+ * Dispatches the mailbox command on behalf of a userspace request.
+ * The output payload is copied to userspace.
 *
 * See cxl_send_cmd().
 */
 static int handle_mailbox_cmd_from_user(struct cxl_dev_state *cxlds,
-					const struct cxl_mem_command *cmd,
-					u64 in_payload, u64 out_payload,
-					s32 *size_out, u32 *retval)
+					struct cxl_mbox_cmd *mbox_cmd,
+					u64 out_payload, s32 *size_out,
+					u32 *retval)
 {
 	struct device *dev = cxlds->dev;
-	struct cxl_mbox_cmd mbox_cmd = {
-		.opcode = cmd->opcode,
-		.size_in = cmd->info.size_in,
-		.size_out = cmd->info.size_out,
-	};
 	int rc;
 
-	if (cmd->info.size_out) {
-		mbox_cmd.payload_out = kvzalloc(cmd->info.size_out, GFP_KERNEL);
-		if (!mbox_cmd.payload_out)
-			return -ENOMEM;
-	}
-
-	if (cmd->info.size_in) {
-		mbox_cmd.payload_in = vmemdup_user(u64_to_user_ptr(in_payload),
-						   cmd->info.size_in);
-		if (IS_ERR(mbox_cmd.payload_in)) {
-			kvfree(mbox_cmd.payload_out);
-			return PTR_ERR(mbox_cmd.payload_in);
-		}
-	}
-
-	dev_dbg(dev,
-		"Submitting %s command for user\n"
-		"\topcode: %x\n"
-		"\tsize: %ub\n",
-		cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
-		cmd->info.size_in);
-
-	dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
-		      "raw command path used\n");
-
-	rc = cxlds->mbox_send(cxlds, &mbox_cmd);
+	rc = cxlds->mbox_send(cxlds, mbox_cmd);
 	if (rc)
 		goto out;
@@ -439,22 +427,21 @@ static int handle_mailbox_cmd_from_user(struct cxl_dev_state *cxlds,
 	 * to userspace. While the payload may have written more output than
 	 * this it will have to be ignored.
 	 */
-	if (mbox_cmd.size_out) {
-		dev_WARN_ONCE(dev, mbox_cmd.size_out > *size_out,
+	if (mbox_cmd->size_out) {
+		dev_WARN_ONCE(dev, mbox_cmd->size_out > *size_out,
 			      "Invalid return size\n");
 		if (copy_to_user(u64_to_user_ptr(out_payload),
-				 mbox_cmd.payload_out, mbox_cmd.size_out)) {
+				 mbox_cmd->payload_out, mbox_cmd->size_out)) {
 			rc = -EFAULT;
 			goto out;
 		}
 	}
 
-	*size_out = mbox_cmd.size_out;
-	*retval = mbox_cmd.return_code;
-
+	*size_out = mbox_cmd->size_out;
+	*retval = mbox_cmd->return_code;
 out:
-	kvfree(mbox_cmd.payload_in);
-	kvfree(mbox_cmd.payload_out);
+	kvfree(mbox_cmd->payload_in);
+	kvfree(mbox_cmd->payload_out);
 	return rc;
 }
@@ -463,7 +450,7 @@ int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s)
 	struct cxl_dev_state *cxlds = cxlmd->cxlds;
 	struct device *dev = &cxlmd->dev;
 	struct cxl_send_command send;
-	struct cxl_mem_command c;
+	struct cxl_mbox_cmd mbox_cmd;
 	int rc;
 
 	dev_dbg(dev, "Send IOCTL\n");
@@ -471,17 +458,19 @@ int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s)
 	if (copy_from_user(&send, s, sizeof(send)))
 		return -EFAULT;
 
-	rc = cxl_validate_cmd_from_user(cxlmd->cxlds, &send, &c);
+	rc = cxl_validate_cmd_from_user(cxlmd->cxlds, &send, &mbox_cmd);
 	if (rc)
 		return rc;
 
-	/* Prepare to handle a full payload for variable sized output */
-	if (c.info.size_out < 0)
-		c.info.size_out = cxlds->payload_size;
+	dev_dbg(dev,
+		"Submitting %s command for user\n"
+		"\topcode: %x\n"
+		"\tsize: %zx\n",
+		cxl_command_names[send.id].name,
+		mbox_cmd.opcode, mbox_cmd.size_in);
 
-	rc = handle_mailbox_cmd_from_user(cxlds, &c, send.in.payload,
-					  send.out.payload, &send.out.size,
-					  &send.retval);
+	rc = handle_mailbox_cmd_from_user(cxlds, &mbox_cmd, send.out.payload,
					  &send.out.size, &send.retval);
 	if (rc)
 		return rc;
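To illustrate the user-visible contract that survives this rework:
struct cxl_send_command is unchanged; only the kernel-side staging of
the bounce buffers moved into cxl_validate_cmd_from_user(), and
handle_mailbox_cmd_from_user() copies at most the hardware-generated
size back out and reports it through out.size. A hypothetical caller
that supplies an output buffer might look like the sketch below
(device fd and the <linux/cxl_mem.h> header name are assumptions, not
taken from the patch):

/*
 * Hypothetical example of a command with an output payload after this
 * rework. On return, out.size reflects the size the hardware actually
 * generated, as propagated through *size_out in the kernel.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/cxl_mem.h>

int read_command_output(int fd, __u32 id, void *buf, __s32 len)
{
	struct cxl_send_command cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.id = id;
	cmd.out.payload = (__u64)(uintptr_t)buf;
	cmd.out.size = len;

	if (ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd) < 0)
		return -1;

	/* out.size now holds the payload size the device returned */
	printf("retval %u, %d output bytes\n", cmd.retval, cmd.out.size);
	return cmd.out.size;
}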
From patchwork Fri Feb 25 20:31:00 2022
X-Patchwork-Submitter: Alison Schofield
X-Patchwork-Id: 12760888
From: alison.schofield@intel.com
To: Ben Widawsky, Dan Williams, Ira Weiny, Vishal Verma
Cc: Alison Schofield, linux-cxl@vger.kernel.org
Subject: [PATCH v2 3/4] cxl/mbox: Block immediate mode in SET_PARTITION_INFO command
Date: Fri, 25 Feb 2022 12:31:00 -0800
X-Mailing-List: linux-cxl@vger.kernel.org

From: Alison Schofield

User space may send the SET_PARTITION_INFO mailbox command using the
IOCTL interface. Inspect the input payload and fail if the immediate
flag is set.

This is the first instance of the driver inspecting an input payload
from user space. Assume there will be more such cases and implement
the check with an extensible helper.

In order for the kernel to react to an immediate partition change, it
needs to assert that the change will not affect any active decode. At
a minimum this requires validating that the device is using HDM
decoders instead of the CXL DVSEC for decode, and that none of the
active HDM decoders are affected by the partition change. For now,
just fail until that support arrives.

Signed-off-by: Alison Schofield
---
 drivers/cxl/core/mbox.c | 42 +++++++++++++++++++++++++++++++++++++++++
 drivers/cxl/cxlmem.h    |  7 +++++++
 2 files changed, 49 insertions(+)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index e0140864a9fd..b49341d7b126 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -207,6 +207,40 @@ static bool cxl_mem_raw_command_allowed(u16 opcode)
 	return true;
 }
 
+/**
+ * cxl_payload_from_user_allowed() - Check contents of in_payload.
+ * @opcode: The mailbox command opcode.
+ * @payload_in: Pointer to the input payload passed in from user space.
+ *
+ * Return:
+ *  * true  - payload_in passes check for @opcode.
+ *  * false - payload_in contains invalid or unsupported values.
+ *
+ * The driver may inspect payload contents before sending a mailbox
+ * command from user space to the device. The intent is to reject
+ * commands with input payloads that are known to be unsafe. This
+ * check is not intended to replace the user's careful selection of
+ * mailbox command parameters and makes no guarantee that the user
+ * command will succeed, nor that it is appropriate.
+ *
+ * The specific checks are determined by the opcode.
+ */
+static bool cxl_payload_from_user_allowed(u16 opcode, void *payload_in)
+{
+	switch (opcode) {
+	case CXL_MBOX_OP_SET_PARTITION_INFO: {
+		struct cxl_mbox_set_partition_info *pi = payload_in;
+
+		if (pi->flags & CXL_SET_PARTITION_IMMEDIATE_FLAG)
+			return false;
+		break;
+	}
+	default:
+		break;
+	}
+	return true;
+}
+
 static int cxl_to_mem_cmd_raw(struct cxl_dev_state *cxlds,
 			      const struct cxl_send_command *send_cmd,
 			      struct cxl_mem_command *mem_cmd)
@@ -336,6 +370,14 @@ static int cxl_validate_cmd_from_user(struct cxl_dev_state *cxlds,
 	if (IS_ERR(mbox_cmd->payload_in))
 		return PTR_ERR(mbox_cmd->payload_in);
 
+	if (!cxl_payload_from_user_allowed(mbox_cmd->opcode,
+					   mbox_cmd->payload_in)) {
+		dev_dbg(cxlds->dev, "%s: input payload not allowed\n",
+			cxl_command_names[mem_cmd.info.id].name);
+		kvfree(mbox_cmd->payload_in);
+		return -EBUSY;
+	}
+
 size_out:
 	/* Prepare to handle a full payload for variable sized output */
 	if (mem_cmd.info.size_out < 0)
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index d5c9a273d07d..db3c20e29def 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -264,6 +264,13 @@ struct cxl_mbox_set_lsa {
 	u8 data[];
 } __packed;
 
+struct cxl_mbox_set_partition_info {
+	u64 volatile_capacity;
+	u8 flags;
+} __packed;
+
+#define CXL_SET_PARTITION_IMMEDIATE_FLAG	BIT(0)
+
 /**
  * struct cxl_mem_command - Driver representation of a memory device command
  * @info: Command information as it exists for the UAPI
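To show the new check from the caller's side: a set-partition request
whose payload leaves the immediate flag clear still passes
cxl_payload_from_user_allowed(), while setting bit 0 of flags makes the
ioctl fail with EBUSY. The sketch below is hypothetical and not part of
the patch; the local payload struct simply mirrors
struct cxl_mbox_set_partition_info above, and the device fd and
<linux/cxl_mem.h> header name are assumptions:

/*
 * Hypothetical user-space sketch of a deferred (next-boot) set-partition
 * request. The command id CXL_MEM_COMMAND_ID_SET_PARTITION_INFO is the
 * same one that patch 4/4 removes from the pmem driver's exclusive list.
 */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/cxl_mem.h>

struct set_partition_info {
	__u64 volatile_capacity;	/* units per the CXL spec, not this patch */
	__u8 flags;			/* bit 0 == immediate change */
} __attribute__((packed));

int set_partition_next_boot(int fd, __u64 volatile_capacity)
{
	struct set_partition_info pi = {
		.volatile_capacity = volatile_capacity,
		.flags = 0,		/* leave the immediate flag clear */
	};
	struct cxl_send_command cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.id = CXL_MEM_COMMAND_ID_SET_PARTITION_INFO;
	cmd.in.payload = (__u64)(uintptr_t)&pi;
	cmd.in.size = sizeof(pi);

	/* With flags bit 0 set, the new check fails this with -1/EBUSY */
	return ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd);
}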
From patchwork Fri Feb 25 20:31:01 2022
X-Patchwork-Submitter: Alison Schofield
X-Patchwork-Id: 12760887
From: alison.schofield@intel.com
To: Ben Widawsky, Dan Williams, Ira Weiny, Vishal Verma
Cc: Alison Schofield, linux-cxl@vger.kernel.org
Subject: [PATCH v2 4/4] cxl/pmem: Remove CXL SET_PARTITION_INFO from exclusive_cmds list
Date: Fri, 25 Feb 2022 12:31:01 -0800
X-Mailing-List: linux-cxl@vger.kernel.org

From: Alison Schofield

With SET_PARTITION_INFO on the exclusive_cmds list for the CXL_PMEM
driver, userspace cannot execute a set-partition command without first
unbinding the pmem driver from the device.

When userspace requests a partition change to take effect on the next
reboot, this unbind requirement is unnecessarily restrictive. The
driver does not need to enforce quiescing of the device before setting
up the 'next' partitions. Of course, userspace still needs to be aware
that changing the size of persistent capacity on the next reboot will
result in the loss of the data stored there. That loss can happen
regardless of whether the pmem driver is bound at the time the
set-partition command is issued.

When userspace requests a partition change to take effect immediately,
restrictions are needed. The CXL_MEM driver currently blocks the use of
immediate mode, making the presence of SET_PARTITION_INFO on this
exclusive commands list redundant. In the future, when the CXL_MEM
driver adds support for immediate changes to device partitions, it will
ensure that the partition change does not affect any active decode.
That means the work will not fall right back here, onto the CXL_PMEM
driver.

Signed-off-by: Alison Schofield
---
 drivers/cxl/pmem.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index fabdb0c6dbf2..73a2868b5f95 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -344,7 +344,6 @@ static __init int cxl_pmem_init(void)
 {
 	int rc;
 
-	set_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, exclusive_cmds);
 	set_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, exclusive_cmds);
 	set_bit(CXL_MEM_COMMAND_ID_SET_LSA, exclusive_cmds);