From patchwork Wed Apr 6 23:33:56 2016
X-Patchwork-Submitter: Daniel Jurgens
X-Patchwork-Id: 8767281
From: Dan Jurgens
To: selinux@tycho.nsa.gov, linux-security-module@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: yevgenyp@mellanox.com, Daniel Jurgens
Subject: [RFC PATCH v2 11/13] ib/core: Enforce Infiniband device SMI security
Date: Thu, 7 Apr 2016 02:33:56 +0300
Message-Id: <1459985638-37233-12-git-send-email-danielj@mellanox.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1459985638-37233-1-git-send-email-danielj@mellanox.com>
References: <1459985638-37233-1-git-send-email-danielj@mellanox.com>

From: Daniel Jurgens

During MAD and snoop agent registration for SMI QPs, check that the
calling process has permission to access the SMI. When sending and
receiving MADs, check that the agent has access to the SMI if it is on
an SMI QP. Because the security policy can change, it is possible that
permission was allowed when the agent was created but is no longer.

Signed-off-by: Daniel Jurgens
Reviewed-by: Eli Cohen
---
 drivers/infiniband/core/mad.c | 52 ++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 51 insertions(+), 1 deletions(-)

diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index 907f8ee..b5f42ad 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -349,6 +349,21 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
 		goto error3;
 	}
 
+	if (qp_type == IB_QPT_SMI) {
+		ret2 = security_ibdev_smi(device->name,
+					  port_num,
+					  &mad_agent_priv->agent);
+		if (ret2) {
+			dev_err(&device->dev,
+				"%s: Access Denied. Err: %d\n",
+				__func__,
+				ret2);
+
+			ret = ERR_PTR(ret2);
+			goto error4;
+		}
+	}
+
 	if (mad_reg_req) {
 		reg_req = kmemdup(mad_reg_req, sizeof *reg_req, GFP_KERNEL);
 		if (!reg_req) {
@@ -535,6 +550,22 @@ struct ib_mad_agent *ib_register_mad_snoop(struct ib_device *device,
 		goto error2;
 	}
 
+	if (qp_type == IB_QPT_SMI) {
+		err = security_ibdev_smi(device->name,
+					 port_num,
+					 &mad_snoop_priv->agent);
+
+		if (err) {
+			dev_err(&device->dev,
+				"%s: Access Denied. Err: %d\n",
+				__func__,
+				err);
+
+			ret = ERR_PTR(err);
+			goto error3;
+		}
+	}
+
 	/* Now, fill in the various structures */
 	mad_snoop_priv->qp_info = &port_priv->qp_info[qpn];
 	mad_snoop_priv->agent.device = device;
@@ -1248,6 +1279,7 @@ int ib_post_send_mad(struct ib_mad_send_buf *send_buf,
 
 	/* Walk list of send WRs and post each on send list */
 	for (; send_buf; send_buf = next_send_buf) {
+		int err = 0;
 
 		mad_send_wr = container_of(send_buf,
 					   struct ib_mad_send_wr_private,
@@ -1255,6 +1287,16 @@ int ib_post_send_mad(struct ib_mad_send_buf *send_buf,
 		mad_agent_priv = mad_send_wr->mad_agent_priv;
 		pkey_index = mad_send_wr->send_wr.pkey_index;
 
+		if (mad_agent_priv->agent.qp->qp_type == IB_QPT_SMI)
+			err = security_ibdev_smi(mad_agent_priv->agent.device->name,
+						 mad_agent_priv->agent.port_num,
+						 &mad_agent_priv->agent);
+
+		if (err) {
+			ret = err;
+			goto error;
+		}
+
 		ret = ib_security_enforce_mad_agent_pkey_access(
 						mad_agent_priv->agent.device,
 						mad_agent_priv->agent.port_num,
@@ -1997,7 +2039,15 @@ static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv,
 	struct ib_mad_send_wr_private *mad_send_wr;
 	struct ib_mad_send_wc mad_send_wc;
 	unsigned long flags;
-	int ret;
+	int ret = 0;
+
+	if (mad_agent_priv->agent.qp->qp_type == IB_QPT_SMI)
+		ret = security_ibdev_smi(mad_agent_priv->agent.device->name,
+					 mad_agent_priv->agent.port_num,
+					 &mad_agent_priv->agent);
+
+	if (ret)
+		goto security_error;
 
 	ret = ib_security_enforce_mad_agent_pkey_access(
 					mad_agent_priv->agent.device,
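
The call sites above assume a security_ibdev_smi() hook that takes the
device name, port number, and MAD agent and returns 0 or a negative errno.
The hook itself is introduced elsewhere in this series; the sketch below is
only the interface shape inferred from these callers (names and types are
assumptions, not the actual declaration):

	/*
	 * Inferred shape of the hook used above; the real declaration is
	 * added by an earlier patch in this series.  Argument types are
	 * assumptions based on the call sites (device name, port number,
	 * MAD agent).
	 */
	#include <linux/types.h>

	struct ib_mad_agent;

	int security_ibdev_smi(const char *dev_name, u8 port_num,
			       struct ib_mad_agent *mad_agent);

	/*
	 * When no security module implements the hook, a default that
	 * always grants SMI access would simply return 0, so every check
	 * in this patch becomes a no-op.
	 */
	static inline int security_ibdev_smi_default(const char *dev_name,
						     u8 port_num,
						     struct ib_mad_agent *mad_agent)
	{
		return 0;
	}

Because the hook is re-evaluated on every send and receive (not just at
agent registration), a policy change after registration is still enforced,
at the cost of one security check per MAD on SMI QPs.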