From patchwork Wed Jan 6 18:23:05 2016
Subject: [PATCH 1/9] IB/qib: Remove ibport and use rdmavt version
From: Dennis Dalessandro
To: dledford@redhat.com
Cc: linux-rdma@vger.kernel.org, Harish Chegondi, Mike Marciniszyn, Ira Weiny
Date: Wed, 06 Jan 2016 10:23:05 -0800
Message-ID: <20160106182304.5401.92117.stgit@scvm10.sc.intel.com>
In-Reply-To: <20160106181837.5401.67435.stgit@scvm10.sc.intel.com>
References: <20160106181837.5401.67435.stgit@scvm10.sc.intel.com>
User-Agent: StGit/0.16
List-ID: <linux-rdma.vger.kernel.org>

From: Harish Chegondi

Remove several ibport members from qib and use the rdmavt versions
instead. rc_acks, rc_qacks, and rc_delayed_comp are defined as per-CPU
variables in rdmavt, whereas they were plain u32 counters in the qib
ibport structure; add the allocation, increment, and sysfs read/clear
support needed to use them as per-CPU variables.
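For readers unfamiliar with the kernel percpu API, the counter pattern this
patch adopts looks roughly like the following. This is a minimal,
self-contained sketch, not code from the patch itself: struct demo_port and
the demo_* helpers are illustrative names standing in for rvt_ibport and the
qib/rdmavt code above.

	#include <linux/errno.h>
	#include <linux/percpu.h>
	#include <linux/types.h>

	/* Hypothetical container, standing in for rvt_ibport. */
	struct demo_port {
		u64 __percpu *rc_acks;	/* hot-path counter, one slot per CPU */
		u64 z_rc_acks;		/* baseline used to emulate clearing */
	};

	static int demo_port_init(struct demo_port *p)
	{
		/* One u64 per possible CPU; released with free_percpu(). */
		p->rc_acks = alloc_percpu(u64);
		if (!p->rc_acks)
			return -ENOMEM;
		return 0;
	}

	/* Hot path: lock-free increment of this CPU's slot. */
	static void demo_count_ack(struct demo_port *p)
	{
		this_cpu_inc(*p->rc_acks);
	}

	/* Slow path (e.g. a sysfs read): sum every CPU's slot. */
	static u64 demo_read_acks(struct demo_port *p)
	{
		u64 total = 0;
		int cpu;

		for_each_possible_cpu(cpu)
			total += *per_cpu_ptr(p->rc_acks, cpu);
		return total - p->z_rc_acks;
	}

Since other CPUs never write a given slot, a "write to zero" is emulated by
snapshotting the current total into the z_ baseline rather than touching the
per-CPU data, which is exactly what the new sysfs store path below does.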
Reviewed-by: Mike Marciniszyn
Reviewed-by: Dennis Dalessandro
Reviewed-by: Ira Weiny
Signed-off-by: Harish Chegondi
---
 drivers/infiniband/hw/qib/qib_driver.c      |    4
 drivers/infiniband/hw/qib/qib_iba6120.c     |    8 -
 drivers/infiniband/hw/qib/qib_iba7322.c     |    2
 drivers/infiniband/hw/qib/qib_init.c        |   10 +
 drivers/infiniband/hw/qib/qib_mad.c         |  240 ++++++++++++++-------------
 drivers/infiniband/hw/qib/qib_qp.c          |   22 +-
 drivers/infiniband/hw/qib/qib_rc.c          |   30 ++-
 drivers/infiniband/hw/qib/qib_ruc.c         |   14 +-
 drivers/infiniband/hw/qib/qib_sdma.c        |    2
 drivers/infiniband/hw/qib/qib_sysfs.c       |   65 +++++++
 drivers/infiniband/hw/qib/qib_uc.c          |    2
 drivers/infiniband/hw/qib/qib_ud.c          |   14 +-
 drivers/infiniband/hw/qib/qib_verbs.c       |   52 +++---
 drivers/infiniband/hw/qib/qib_verbs.h       |   46 -----
 drivers/infiniband/hw/qib/qib_verbs_mcast.c |   28 ++-
 15 files changed, 281 insertions(+), 258 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_driver.c b/drivers/infiniband/hw/qib/qib_driver.c
index eafdee9..e8b239c 100644
--- a/drivers/infiniband/hw/qib/qib_driver.c
+++ b/drivers/infiniband/hw/qib/qib_driver.c
@@ -379,7 +379,7 @@ static u32 qib_rcv_hdrerr(struct qib_ctxtdata *rcd, struct qib_pportdata *ppd,
 
 			/* Check for valid receive state. */
 			if (!(ib_qib_state_ops[qp->state] & QIB_PROCESS_RECV_OK)) {
-				ibp->n_pkt_drops++;
+				ibp->rvp.n_pkt_drops++;
 				goto unlock;
 			}
 
@@ -399,7 +399,7 @@ static u32 qib_rcv_hdrerr(struct qib_ctxtdata *rcd, struct qib_pportdata *ppd,
 			    IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST) {
 				diff = qib_cmp24(psn, qp->r_psn);
 				if (!qp->r_nak_state && diff >= 0) {
-					ibp->n_rc_seqnak++;
+					ibp->rvp.n_rc_seqnak++;
 					qp->r_nak_state = IB_NAK_PSN_ERROR;
 					/* Use the expected PSN. */
diff --git a/drivers/infiniband/hw/qib/qib_iba6120.c b/drivers/infiniband/hw/qib/qib_iba6120.c
index 4b92780..a3733f2 100644
--- a/drivers/infiniband/hw/qib/qib_iba6120.c
+++ b/drivers/infiniband/hw/qib/qib_iba6120.c
@@ -2956,13 +2956,13 @@ static void pma_6120_timer(unsigned long data)
 	struct qib_ibport *ibp = &ppd->ibport_data;
 	unsigned long flags;
 
-	spin_lock_irqsave(&ibp->lock, flags);
+	spin_lock_irqsave(&ibp->rvp.lock, flags);
 	if (cs->pma_sample_status == IB_PMA_SAMPLE_STATUS_STARTED) {
 		cs->pma_sample_status = IB_PMA_SAMPLE_STATUS_RUNNING;
 		qib_snapshot_counters(ppd, &cs->sword, &cs->rword,
 				      &cs->spkts, &cs->rpkts, &cs->xmit_wait);
 		mod_timer(&cs->pma_timer,
-			jiffies + usecs_to_jiffies(ibp->pma_sample_interval));
+		      jiffies + usecs_to_jiffies(ibp->rvp.pma_sample_interval));
 	} else if (cs->pma_sample_status == IB_PMA_SAMPLE_STATUS_RUNNING) {
 		u64 ta, tb, tc, td, te;
 
@@ -2975,11 +2975,11 @@ static void pma_6120_timer(unsigned long data)
 		cs->rpkts = td - cs->rpkts;
 		cs->xmit_wait = te - cs->xmit_wait;
 	}
-	spin_unlock_irqrestore(&ibp->lock, flags);
+	spin_unlock_irqrestore(&ibp->rvp.lock, flags);
 }
 
 /*
- * Note that the caller has the ibp->lock held.
+ * Note that the caller has the ibp->rvp.lock held.
  */
 static void qib_set_cntr_6120_sample(struct qib_pportdata *ppd, u32 intv,
 				     u32 start)
diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c
index 1fbe308..ca28c19 100644
--- a/drivers/infiniband/hw/qib/qib_iba7322.c
+++ b/drivers/infiniband/hw/qib/qib_iba7322.c
@@ -5497,7 +5497,7 @@ static void try_7322_ipg(struct qib_pportdata *ppd)
 	unsigned delay;
 	int ret;
 
-	agent = ibp->send_agent;
+	agent = ibp->rvp.send_agent;
 	if (!agent)
 		goto retry;
 
diff --git a/drivers/infiniband/hw/qib/qib_init.c b/drivers/infiniband/hw/qib/qib_init.c
index 47190f1..5087a1f 100644
--- a/drivers/infiniband/hw/qib/qib_init.c
+++ b/drivers/infiniband/hw/qib/qib_init.c
@@ -245,6 +245,13 @@ int qib_init_pportdata(struct qib_pportdata *ppd, struct qib_devdata *dd,
 		alloc_percpu(struct qib_pma_counters);
 	if (!ppd->ibport_data.pmastats)
 		return -ENOMEM;
+	ppd->ibport_data.rvp.rc_acks = alloc_percpu(u64);
+	ppd->ibport_data.rvp.rc_qacks = alloc_percpu(u64);
+	ppd->ibport_data.rvp.rc_delayed_comp = alloc_percpu(u64);
+	if (!(ppd->ibport_data.rvp.rc_acks) ||
+	    !(ppd->ibport_data.rvp.rc_qacks) ||
+	    !(ppd->ibport_data.rvp.rc_delayed_comp))
+		return -ENOMEM;
 
 	if (qib_cc_table_size < IB_CCT_MIN_ENTRIES)
 		goto bail;
@@ -632,6 +639,9 @@ wq_error:
 static void qib_free_pportdata(struct qib_pportdata *ppd)
 {
 	free_percpu(ppd->ibport_data.pmastats);
+	free_percpu(ppd->ibport_data.rvp.rc_acks);
+	free_percpu(ppd->ibport_data.rvp.rc_qacks);
+	free_percpu(ppd->ibport_data.rvp.rc_delayed_comp);
 	ppd->ibport_data.pmastats = NULL;
 }
 
diff --git a/drivers/infiniband/hw/qib/qib_mad.c b/drivers/infiniband/hw/qib/qib_mad.c
index 43f8c49..3e8dde2 100644
--- a/drivers/infiniband/hw/qib/qib_mad.c
+++ b/drivers/infiniband/hw/qib/qib_mad.c
@@ -70,7 +70,7 @@ static void qib_send_trap(struct qib_ibport *ibp, void *data, unsigned len)
 	unsigned long flags;
 	unsigned long timeout;
 
-	agent = ibp->send_agent;
+	agent = ibp->rvp.send_agent;
 	if (!agent)
 		return;
 
@@ -79,7 +79,8 @@ static void qib_send_trap(struct qib_ibport *ibp, void *data, unsigned len)
 		return;
 
 	/* o14-2 */
-	if (ibp->trap_timeout && time_before(jiffies, ibp->trap_timeout))
+	if (ibp->rvp.trap_timeout &&
+	    time_before(jiffies, ibp->rvp.trap_timeout))
 		return;
 
 	send_buf = ib_create_send_mad(agent, 0, 0, 0, IB_MGMT_MAD_HDR,
@@ -93,18 +94,18 @@ static void qib_send_trap(struct qib_ibport *ibp, void *data, unsigned len)
 	smp->mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED;
 	smp->class_version = 1;
 	smp->method = IB_MGMT_METHOD_TRAP;
-	ibp->tid++;
-	smp->tid = cpu_to_be64(ibp->tid);
+	ibp->rvp.tid++;
+	smp->tid = cpu_to_be64(ibp->rvp.tid);
 	smp->attr_id = IB_SMP_ATTR_NOTICE;
 	/* o14-1: smp->mkey = 0; */
 	memcpy(smp->data, data, len);
 
-	spin_lock_irqsave(&ibp->lock, flags);
+	spin_lock_irqsave(&ibp->rvp.lock, flags);
 	if (!ibp->sm_ah) {
-		if (ibp->sm_lid != be16_to_cpu(IB_LID_PERMISSIVE)) {
+		if (ibp->rvp.sm_lid != be16_to_cpu(IB_LID_PERMISSIVE)) {
 			struct ib_ah *ah;
 
-			ah = qib_create_qp0_ah(ibp, ibp->sm_lid);
+			ah = qib_create_qp0_ah(ibp, ibp->rvp.sm_lid);
 			if (IS_ERR(ah))
 				ret = PTR_ERR(ah);
 			else {
@@ -118,17 +119,17 @@ static void qib_send_trap(struct qib_ibport *ibp, void *data, unsigned len)
 		send_buf->ah = &ibp->sm_ah->ibah;
 		ret = 0;
 	}
-	spin_unlock_irqrestore(&ibp->lock, flags);
+	spin_unlock_irqrestore(&ibp->rvp.lock, flags);
 
 	if (!ret)
 		ret = ib_post_send_mad(send_buf, NULL);
 	if (!ret) {
 		/* 4.096 usec. */
-		timeout = (4096 * (1UL << ibp->subnet_timeout)) / 1000;
-		ibp->trap_timeout = jiffies + usecs_to_jiffies(timeout);
+		timeout = (4096 * (1UL << ibp->rvp.subnet_timeout)) / 1000;
+		ibp->rvp.trap_timeout = jiffies + usecs_to_jiffies(timeout);
 	} else {
 		ib_free_send_mad(send_buf);
-		ibp->trap_timeout = 0;
+		ibp->rvp.trap_timeout = 0;
 	}
 }
 
@@ -141,10 +142,10 @@ void qib_bad_pqkey(struct qib_ibport *ibp, __be16 trap_num, u32 key, u32 sl,
 	struct ib_mad_notice_attr data;
 
 	if (trap_num == IB_NOTICE_TRAP_BAD_PKEY)
-		ibp->pkey_violations++;
+		ibp->rvp.pkey_violations++;
 	else
-		ibp->qkey_violations++;
-	ibp->n_pkt_drops++;
+		ibp->rvp.qkey_violations++;
+	ibp->rvp.n_pkt_drops++;
 
 	/* Send violation trap */
 	data.generic_type = IB_NOTICE_TYPE_SECURITY;
@@ -217,8 +218,8 @@ void qib_cap_mask_chg(struct qib_ibport *ibp)
 	data.toggle_count = 0;
 	memset(&data.details, 0, sizeof(data.details));
 	data.details.ntc_144.lid = data.issuer_lid;
-	data.details.ntc_144.new_cap_mask = cpu_to_be32(ibp->port_cap_flags);
-
+	data.details.ntc_144.new_cap_mask =
+					cpu_to_be32(ibp->rvp.port_cap_flags);
 	qib_send_trap(ibp, &data, sizeof(data));
 }
 
@@ -409,37 +410,38 @@ static int check_mkey(struct qib_ibport *ibp, struct ib_smp *smp, int mad_flags)
 	int ret = 0;
 
 	/* Is the mkey in the process of expiring? */
-	if (ibp->mkey_lease_timeout &&
-	    time_after_eq(jiffies, ibp->mkey_lease_timeout)) {
+	if (ibp->rvp.mkey_lease_timeout &&
+	    time_after_eq(jiffies, ibp->rvp.mkey_lease_timeout)) {
 		/* Clear timeout and mkey protection field. */
-		ibp->mkey_lease_timeout = 0;
-		ibp->mkeyprot = 0;
+		ibp->rvp.mkey_lease_timeout = 0;
+		ibp->rvp.mkeyprot = 0;
 	}
 
-	if ((mad_flags & IB_MAD_IGNORE_MKEY) || ibp->mkey == 0 ||
-	    ibp->mkey == smp->mkey)
+	if ((mad_flags & IB_MAD_IGNORE_MKEY) || ibp->rvp.mkey == 0 ||
+	    ibp->rvp.mkey == smp->mkey)
 		valid_mkey = 1;
 
 	/* Unset lease timeout on any valid Get/Set/TrapRepress */
-	if (valid_mkey && ibp->mkey_lease_timeout &&
+	if (valid_mkey && ibp->rvp.mkey_lease_timeout &&
 	    (smp->method == IB_MGMT_METHOD_GET ||
 	     smp->method == IB_MGMT_METHOD_SET ||
 	     smp->method == IB_MGMT_METHOD_TRAP_REPRESS))
-		ibp->mkey_lease_timeout = 0;
+		ibp->rvp.mkey_lease_timeout = 0;
 
 	if (!valid_mkey) {
 		switch (smp->method) {
 		case IB_MGMT_METHOD_GET:
 			/* Bad mkey not a violation below level 2 */
-			if (ibp->mkeyprot < 2)
+			if (ibp->rvp.mkeyprot < 2)
 				break;
 		case IB_MGMT_METHOD_SET:
 		case IB_MGMT_METHOD_TRAP_REPRESS:
-			if (ibp->mkey_violations != 0xFFFF)
-				++ibp->mkey_violations;
-			if (!ibp->mkey_lease_timeout && ibp->mkey_lease_period)
-				ibp->mkey_lease_timeout = jiffies +
-					ibp->mkey_lease_period * HZ;
+			if (ibp->rvp.mkey_violations != 0xFFFF)
+				++ibp->rvp.mkey_violations;
+			if (!ibp->rvp.mkey_lease_timeout &&
+			    ibp->rvp.mkey_lease_period)
+				ibp->rvp.mkey_lease_timeout = jiffies +
+					ibp->rvp.mkey_lease_period * HZ;
 			/* Generate a trap notice. */
 			qib_bad_mkey(ibp, smp);
 			ret = 1;
@@ -489,15 +491,15 @@ static int subn_get_portinfo(struct ib_smp *smp, struct ib_device *ibdev,
 
 	/* Only return the mkey if the protection field allows it. */
 	if (!(smp->method == IB_MGMT_METHOD_GET &&
-	      ibp->mkey != smp->mkey &&
-	      ibp->mkeyprot == 1))
-		pip->mkey = ibp->mkey;
-	pip->gid_prefix = ibp->gid_prefix;
+	      ibp->rvp.mkey != smp->mkey &&
+	      ibp->rvp.mkeyprot == 1))
+		pip->mkey = ibp->rvp.mkey;
+	pip->gid_prefix = ibp->rvp.gid_prefix;
 	pip->lid = cpu_to_be16(ppd->lid);
-	pip->sm_lid = cpu_to_be16(ibp->sm_lid);
-	pip->cap_mask = cpu_to_be32(ibp->port_cap_flags);
+	pip->sm_lid = cpu_to_be16(ibp->rvp.sm_lid);
+	pip->cap_mask = cpu_to_be32(ibp->rvp.port_cap_flags);
 	/* pip->diag_code; */
-	pip->mkey_lease_period = cpu_to_be16(ibp->mkey_lease_period);
+	pip->mkey_lease_period = cpu_to_be16(ibp->rvp.mkey_lease_period);
 	pip->local_port_num = port;
 	pip->link_width_enabled = ppd->link_width_enabled;
 	pip->link_width_supported = ppd->link_width_supported;
@@ -508,7 +510,7 @@ static int subn_get_portinfo(struct ib_smp *smp, struct ib_device *ibdev,
 	pip->portphysstate_linkdown =
 		(dd->f_ibphys_portstate(ppd->lastibcstat) << 4) |
 		(get_linkdowndefaultstate(ppd) ? 1 : 2);
-	pip->mkeyprot_resv_lmc = (ibp->mkeyprot << 6) | ppd->lmc;
+	pip->mkeyprot_resv_lmc = (ibp->rvp.mkeyprot << 6) | ppd->lmc;
 	pip->linkspeedactive_enabled = (ppd->link_speed_active << 4) |
 		ppd->link_speed_enabled;
 	switch (ppd->ibmtu) {
@@ -529,9 +531,9 @@ static int subn_get_portinfo(struct ib_smp *smp, struct ib_device *ibdev,
 		mtu = IB_MTU_256;
 		break;
 	}
-	pip->neighbormtu_mastersmsl = (mtu << 4) | ibp->sm_sl;
+	pip->neighbormtu_mastersmsl = (mtu << 4) | ibp->rvp.sm_sl;
 	pip->vlcap_inittype = ppd->vls_supported << 4;  /* InitType = 0 */
-	pip->vl_high_limit = ibp->vl_high_limit;
+	pip->vl_high_limit = ibp->rvp.vl_high_limit;
 	pip->vl_arb_high_cap =
 		dd->f_get_ib_cfg(ppd, QIB_IB_CFG_VL_HIGH_CAP);
 	pip->vl_arb_low_cap =
@@ -542,20 +544,20 @@ static int subn_get_portinfo(struct ib_smp *smp, struct ib_device *ibdev,
 	/* pip->vlstallcnt_hoqlife; */
 	pip->operationalvl_pei_peo_fpi_fpo =
 		dd->f_get_ib_cfg(ppd, QIB_IB_CFG_OP_VLS) << 4;
-	pip->mkey_violations = cpu_to_be16(ibp->mkey_violations);
+	pip->mkey_violations = cpu_to_be16(ibp->rvp.mkey_violations);
 	/* P_KeyViolations are counted by hardware. */
-	pip->pkey_violations = cpu_to_be16(ibp->pkey_violations);
-	pip->qkey_violations = cpu_to_be16(ibp->qkey_violations);
+	pip->pkey_violations = cpu_to_be16(ibp->rvp.pkey_violations);
+	pip->qkey_violations = cpu_to_be16(ibp->rvp.qkey_violations);
 	/* Only the hardware GUID is supported for now */
 	pip->guid_cap = QIB_GUIDS_PER_PORT;
-	pip->clientrereg_resv_subnetto = ibp->subnet_timeout;
+	pip->clientrereg_resv_subnetto = ibp->rvp.subnet_timeout;
 	/* 32.768 usec. response time (guessing) */
 	pip->resv_resptimevalue = 3;
 	pip->localphyerrors_overrunerrors =
 		(get_phyerrthreshold(ppd) << 4) |
 		get_overrunthreshold(ppd);
 	/* pip->max_credit_hint; */
-	if (ibp->port_cap_flags & IB_PORT_LINK_LATENCY_SUP) {
+	if (ibp->rvp.port_cap_flags & IB_PORT_LINK_LATENCY_SUP) {
 		u32 v;
 
 		v = dd->f_get_ib_cfg(ppd, QIB_IB_CFG_LINKLATENCY);
@@ -685,9 +687,9 @@ static int subn_set_portinfo(struct ib_smp *smp, struct ib_device *ibdev,
 	event.device = ibdev;
 	event.element.port_num = port;
 
-	ibp->mkey = pip->mkey;
-	ibp->gid_prefix = pip->gid_prefix;
-	ibp->mkey_lease_period = be16_to_cpu(pip->mkey_lease_period);
+	ibp->rvp.mkey = pip->mkey;
+	ibp->rvp.gid_prefix = pip->gid_prefix;
+	ibp->rvp.mkey_lease_period = be16_to_cpu(pip->mkey_lease_period);
 
 	lid = be16_to_cpu(pip->lid);
 	/* Must be a valid unicast LID address. */
@@ -708,19 +710,19 @@ static int subn_set_portinfo(struct ib_smp *smp, struct ib_device *ibdev,
 	/* Must be a valid unicast LID address. */
 	if (smlid == 0 || smlid >= be16_to_cpu(IB_MULTICAST_LID_BASE))
 		smp->status |= IB_SMP_INVALID_FIELD;
-	else if (smlid != ibp->sm_lid || msl != ibp->sm_sl) {
-		spin_lock_irqsave(&ibp->lock, flags);
+	else if (smlid != ibp->rvp.sm_lid || msl != ibp->rvp.sm_sl) {
+		spin_lock_irqsave(&ibp->rvp.lock, flags);
 		if (ibp->sm_ah) {
-			if (smlid != ibp->sm_lid)
+			if (smlid != ibp->rvp.sm_lid)
 				ibp->sm_ah->attr.dlid = smlid;
-			if (msl != ibp->sm_sl)
+			if (msl != ibp->rvp.sm_sl)
 				ibp->sm_ah->attr.sl = msl;
 		}
-		spin_unlock_irqrestore(&ibp->lock, flags);
-		if (smlid != ibp->sm_lid)
-			ibp->sm_lid = smlid;
-		if (msl != ibp->sm_sl)
-			ibp->sm_sl = msl;
+		spin_unlock_irqrestore(&ibp->rvp.lock, flags);
+		if (smlid != ibp->rvp.sm_lid)
+			ibp->rvp.sm_lid = smlid;
+		if (msl != ibp->rvp.sm_sl)
+			ibp->rvp.sm_sl = msl;
 		event.event = IB_EVENT_SM_CHANGE;
 		ib_dispatch_event(&event);
 	}
@@ -768,10 +770,10 @@ static int subn_set_portinfo(struct ib_smp *smp, struct ib_device *ibdev,
 			smp->status |= IB_SMP_INVALID_FIELD;
 	}
 
-	ibp->mkeyprot = pip->mkeyprot_resv_lmc >> 6;
-	ibp->vl_high_limit = pip->vl_high_limit;
+	ibp->rvp.mkeyprot = pip->mkeyprot_resv_lmc >> 6;
+	ibp->rvp.vl_high_limit = pip->vl_high_limit;
 	(void) dd->f_set_ib_cfg(ppd, QIB_IB_CFG_VL_HIGH_LIMIT,
-				    ibp->vl_high_limit);
+				    ibp->rvp.vl_high_limit);
 
 	mtu = ib_mtu_enum_to_int((pip->neighbormtu_mastersmsl >> 4) & 0xF);
 	if (mtu == -1)
@@ -789,13 +791,13 @@ static int subn_set_portinfo(struct ib_smp *smp, struct ib_device *ibdev,
 	}
 
 	if (pip->mkey_violations == 0)
-		ibp->mkey_violations = 0;
+		ibp->rvp.mkey_violations = 0;
 
 	if (pip->pkey_violations == 0)
-		ibp->pkey_violations = 0;
+		ibp->rvp.pkey_violations = 0;
 
 	if (pip->qkey_violations == 0)
-		ibp->qkey_violations = 0;
+		ibp->rvp.qkey_violations = 0;
 
 	ore = pip->localphyerrors_overrunerrors;
 	if (set_phyerrthreshold(ppd, (ore >> 4) & 0xF))
@@ -804,7 +806,7 @@ static int subn_set_portinfo(struct ib_smp *smp, struct ib_device *ibdev,
 	if (set_overrunthreshold(ppd, (ore & 0xF)))
 		smp->status |= IB_SMP_INVALID_FIELD;
 
-	ibp->subnet_timeout = pip->clientrereg_resv_subnetto & 0x1F;
+	ibp->rvp.subnet_timeout = pip->clientrereg_resv_subnetto & 0x1F;
 
 	/*
 	 * Do the port state change now that the other link parameters
@@ -1062,7 +1064,7 @@ static int subn_get_sl_to_vl(struct ib_smp *smp, struct ib_device *ibdev,
 
 	memset(smp->data, 0, sizeof(smp->data));
 
-	if (!(ibp->port_cap_flags & IB_PORT_SL_MAP_SUP))
+	if (!(ibp->rvp.port_cap_flags & IB_PORT_SL_MAP_SUP))
 		smp->status |= IB_SMP_UNSUP_METHOD;
 	else
 		for (i = 0; i < ARRAY_SIZE(ibp->sl_to_vl); i += 2)
@@ -1078,7 +1080,7 @@ static int subn_set_sl_to_vl(struct ib_smp *smp, struct ib_device *ibdev,
 	u8 *p = (u8 *) smp->data;
 	unsigned i;
 
-	if (!(ibp->port_cap_flags & IB_PORT_SL_MAP_SUP)) {
+	if (!(ibp->rvp.port_cap_flags & IB_PORT_SL_MAP_SUP)) {
 		smp->status |= IB_SMP_UNSUP_METHOD;
 		return reply(smp);
 	}
@@ -1195,20 +1197,20 @@ static int pma_get_portsamplescontrol(struct ib_pma_mad *pmp,
 		pmp->mad_hdr.status |= IB_SMP_INVALID_FIELD;
 		goto bail;
 	}
-	spin_lock_irqsave(&ibp->lock, flags);
+	spin_lock_irqsave(&ibp->rvp.lock, flags);
 	p->tick = dd->f_get_ib_cfg(ppd, QIB_IB_CFG_PMA_TICKS);
 	p->sample_status = dd->f_portcntr(ppd, QIBPORTCNTR_PSSTAT);
 	p->counter_width = 4;   /* 32 bit counters */
 	p->counter_mask0_9 = COUNTER_MASK0_9;
-	p->sample_start = cpu_to_be32(ibp->pma_sample_start);
-	p->sample_interval = cpu_to_be32(ibp->pma_sample_interval);
-	p->tag = cpu_to_be16(ibp->pma_tag);
-	p->counter_select[0] = ibp->pma_counter_select[0];
-	p->counter_select[1] = ibp->pma_counter_select[1];
-	p->counter_select[2] = ibp->pma_counter_select[2];
-	p->counter_select[3] = ibp->pma_counter_select[3];
-	p->counter_select[4] = ibp->pma_counter_select[4];
-	spin_unlock_irqrestore(&ibp->lock, flags);
+	p->sample_start = cpu_to_be32(ibp->rvp.pma_sample_start);
+	p->sample_interval = cpu_to_be32(ibp->rvp.pma_sample_interval);
+	p->tag = cpu_to_be16(ibp->rvp.pma_tag);
+	p->counter_select[0] = ibp->rvp.pma_counter_select[0];
+	p->counter_select[1] = ibp->rvp.pma_counter_select[1];
+	p->counter_select[2] = ibp->rvp.pma_counter_select[2];
+	p->counter_select[3] = ibp->rvp.pma_counter_select[3];
+	p->counter_select[4] = ibp->rvp.pma_counter_select[4];
+	spin_unlock_irqrestore(&ibp->rvp.lock, flags);
 
 bail:
 	return reply((struct ib_smp *) pmp);
@@ -1233,7 +1235,7 @@ static int pma_set_portsamplescontrol(struct ib_pma_mad *pmp,
 		goto bail;
 	}
 
-	spin_lock_irqsave(&ibp->lock, flags);
+	spin_lock_irqsave(&ibp->rvp.lock, flags);
 
 	/* Port Sampling code owns the PS* HW counters */
 	xmit_flags = ppd->cong_stats.flags;
@@ -1242,18 +1244,18 @@ static int pma_set_portsamplescontrol(struct ib_pma_mad *pmp,
 	if (status == IB_PMA_SAMPLE_STATUS_DONE ||
 	    (status == IB_PMA_SAMPLE_STATUS_RUNNING &&
 	     xmit_flags == IB_PMA_CONG_HW_CONTROL_TIMER)) {
-		ibp->pma_sample_start = be32_to_cpu(p->sample_start);
-		ibp->pma_sample_interval = be32_to_cpu(p->sample_interval);
-		ibp->pma_tag = be16_to_cpu(p->tag);
-		ibp->pma_counter_select[0] = p->counter_select[0];
-		ibp->pma_counter_select[1] = p->counter_select[1];
-		ibp->pma_counter_select[2] = p->counter_select[2];
-		ibp->pma_counter_select[3] = p->counter_select[3];
-		ibp->pma_counter_select[4] = p->counter_select[4];
-		dd->f_set_cntr_sample(ppd, ibp->pma_sample_interval,
-				      ibp->pma_sample_start);
+		ibp->rvp.pma_sample_start = be32_to_cpu(p->sample_start);
+		ibp->rvp.pma_sample_interval = be32_to_cpu(p->sample_interval);
+		ibp->rvp.pma_tag = be16_to_cpu(p->tag);
+		ibp->rvp.pma_counter_select[0] = p->counter_select[0];
+		ibp->rvp.pma_counter_select[1] = p->counter_select[1];
+		ibp->rvp.pma_counter_select[2] = p->counter_select[2];
+		ibp->rvp.pma_counter_select[3] = p->counter_select[3];
+		ibp->rvp.pma_counter_select[4] = p->counter_select[4];
+		dd->f_set_cntr_sample(ppd, ibp->rvp.pma_sample_interval,
+				      ibp->rvp.pma_sample_start);
 	}
-	spin_unlock_irqrestore(&ibp->lock, flags);
+	spin_unlock_irqrestore(&ibp->rvp.lock, flags);
 
 	ret = pma_get_portsamplescontrol(pmp, ibdev, port);
 
@@ -1357,8 +1359,8 @@ static int pma_get_portsamplesresult(struct ib_pma_mad *pmp,
 	int i;
 
 	memset(pmp->data, 0, sizeof(pmp->data));
-	spin_lock_irqsave(&ibp->lock, flags);
-	p->tag = cpu_to_be16(ibp->pma_tag);
+	spin_lock_irqsave(&ibp->rvp.lock, flags);
+	p->tag = cpu_to_be16(ibp->rvp.pma_tag);
 	if (ppd->cong_stats.flags == IB_PMA_CONG_HW_CONTROL_TIMER)
 		p->sample_status = IB_PMA_SAMPLE_STATUS_DONE;
 	else {
@@ -1373,11 +1375,11 @@ static int pma_get_portsamplesresult(struct ib_pma_mad *pmp,
 			ppd->cong_stats.flags = IB_PMA_CONG_HW_CONTROL_TIMER;
 		}
 	}
-	for (i = 0; i < ARRAY_SIZE(ibp->pma_counter_select); i++)
+	for (i = 0; i < ARRAY_SIZE(ibp->rvp.pma_counter_select); i++)
 		p->counter[i] = cpu_to_be32(
 			get_cache_hw_sample_counters(
-				ppd, ibp->pma_counter_select[i]));
-	spin_unlock_irqrestore(&ibp->lock, flags);
+				ppd, ibp->rvp.pma_counter_select[i]));
+	spin_unlock_irqrestore(&ibp->rvp.lock, flags);
 
 	return reply((struct ib_smp *) pmp);
 }
@@ -1397,8 +1399,8 @@ static int pma_get_portsamplesresult_ext(struct ib_pma_mad *pmp,
 
 	/* Port Sampling code owns the PS* HW counters */
 	memset(pmp->data, 0, sizeof(pmp->data));
-	spin_lock_irqsave(&ibp->lock, flags);
-	p->tag = cpu_to_be16(ibp->pma_tag);
+	spin_lock_irqsave(&ibp->rvp.lock, flags);
+	p->tag = cpu_to_be16(ibp->rvp.pma_tag);
 	if (ppd->cong_stats.flags == IB_PMA_CONG_HW_CONTROL_TIMER)
 		p->sample_status = IB_PMA_SAMPLE_STATUS_DONE;
 	else {
@@ -1415,11 +1417,11 @@ static int pma_get_portsamplesresult_ext(struct ib_pma_mad *pmp,
 			ppd->cong_stats.flags = IB_PMA_CONG_HW_CONTROL_TIMER;
 		}
 	}
-	for (i = 0; i < ARRAY_SIZE(ibp->pma_counter_select); i++)
+	for (i = 0; i < ARRAY_SIZE(ibp->rvp.pma_counter_select); i++)
 		p->counter[i] = cpu_to_be64(
 			get_cache_hw_sample_counters(
-				ppd, ibp->pma_counter_select[i]));
-	spin_unlock_irqrestore(&ibp->lock, flags);
+				ppd, ibp->rvp.pma_counter_select[i]));
+	spin_unlock_irqrestore(&ibp->rvp.lock, flags);
 
 	return reply((struct ib_smp *) pmp);
 }
@@ -1453,7 +1455,7 @@ static int pma_get_portcounters(struct ib_pma_mad *pmp,
 	cntrs.excessive_buffer_overrun_errors -=
 		ibp->z_excessive_buffer_overrun_errors;
 	cntrs.vl15_dropped -= ibp->z_vl15_dropped;
-	cntrs.vl15_dropped += ibp->n_vl15_dropped;
+	cntrs.vl15_dropped += ibp->rvp.n_vl15_dropped;
 
 	memset(pmp->data, 0, sizeof(pmp->data));
 
@@ -1546,9 +1548,9 @@ static int pma_get_portcounters_cong(struct ib_pma_mad *pmp,
 		pmp->mad_hdr.status |= IB_SMP_INVALID_FIELD;
 
 	qib_get_counters(ppd, &cntrs);
-	spin_lock_irqsave(&ppd->ibport_data.lock, flags);
+	spin_lock_irqsave(&ppd->ibport_data.rvp.lock, flags);
 	xmit_wait_counter = xmit_wait_get_value_delta(ppd);
-	spin_unlock_irqrestore(&ppd->ibport_data.lock, flags);
+	spin_unlock_irqrestore(&ppd->ibport_data.rvp.lock, flags);
 
 	/* Adjust counters for any resets done. */
 	cntrs.symbol_error_counter -= ibp->z_symbol_error_counter;
@@ -1564,7 +1566,7 @@ static int pma_get_portcounters_cong(struct ib_pma_mad *pmp,
 	cntrs.excessive_buffer_overrun_errors -=
 		ibp->z_excessive_buffer_overrun_errors;
 	cntrs.vl15_dropped -= ibp->z_vl15_dropped;
-	cntrs.vl15_dropped += ibp->n_vl15_dropped;
+	cntrs.vl15_dropped += ibp->rvp.n_vl15_dropped;
 	cntrs.port_xmit_data -= ibp->z_port_xmit_data;
 	cntrs.port_rcv_data -= ibp->z_port_rcv_data;
 	cntrs.port_xmit_packets -= ibp->z_port_xmit_packets;
@@ -1743,7 +1745,7 @@ static int pma_set_portcounters(struct ib_pma_mad *pmp,
 			cntrs.excessive_buffer_overrun_errors;
 
 	if (p->counter_select & IB_PMA_SEL_PORT_VL15_DROPPED) {
-		ibp->n_vl15_dropped = 0;
+		ibp->rvp.n_vl15_dropped = 0;
 		ibp->z_vl15_dropped = cntrs.vl15_dropped;
 	}
 
@@ -1778,11 +1780,11 @@ static int pma_set_portcounters_cong(struct ib_pma_mad *pmp,
 	ret = pma_get_portcounters_cong(pmp, ibdev, port);
 
 	if (counter_select & IB_PMA_SEL_CONG_XMIT) {
-		spin_lock_irqsave(&ppd->ibport_data.lock, flags);
+		spin_lock_irqsave(&ppd->ibport_data.rvp.lock, flags);
 		ppd->cong_stats.counter = 0;
 		dd->f_set_cntr_sample(ppd, QIB_CONG_TIMER_PSINTERVAL,
 				      0x0);
-		spin_unlock_irqrestore(&ppd->ibport_data.lock, flags);
+		spin_unlock_irqrestore(&ppd->ibport_data.rvp.lock, flags);
 	}
 	if (counter_select & IB_PMA_SEL_CONG_PORT_DATA) {
 		ibp->z_port_xmit_data = cntrs.port_xmit_data;
@@ -1806,7 +1808,7 @@ static int pma_set_portcounters_cong(struct ib_pma_mad *pmp,
 			cntrs.local_link_integrity_errors;
 		ibp->z_excessive_buffer_overrun_errors =
 			cntrs.excessive_buffer_overrun_errors;
-		ibp->n_vl15_dropped = 0;
+		ibp->rvp.n_vl15_dropped = 0;
 		ibp->z_vl15_dropped = cntrs.vl15_dropped;
 	}
 
@@ -1916,12 +1918,12 @@ static int process_subn(struct ib_device *ibdev, int mad_flags,
 			ret = subn_get_vl_arb(smp, ibdev, port);
 			goto bail;
 		case IB_SMP_ATTR_SM_INFO:
-			if (ibp->port_cap_flags & IB_PORT_SM_DISABLED) {
+			if (ibp->rvp.port_cap_flags & IB_PORT_SM_DISABLED) {
 				ret = IB_MAD_RESULT_SUCCESS |
 					IB_MAD_RESULT_CONSUMED;
 				goto bail;
 			}
-			if (ibp->port_cap_flags & IB_PORT_SM) {
+			if (ibp->rvp.port_cap_flags & IB_PORT_SM) {
 				ret = IB_MAD_RESULT_SUCCESS;
 				goto bail;
 			}
@@ -1950,12 +1952,12 @@ static int process_subn(struct ib_device *ibdev, int mad_flags,
 			ret = subn_set_vl_arb(smp, ibdev, port);
 			goto bail;
 		case IB_SMP_ATTR_SM_INFO:
-			if (ibp->port_cap_flags & IB_PORT_SM_DISABLED) {
+			if (ibp->rvp.port_cap_flags & IB_PORT_SM_DISABLED) {
 				ret = IB_MAD_RESULT_SUCCESS |
 					IB_MAD_RESULT_CONSUMED;
 				goto bail;
 			}
-			if (ibp->port_cap_flags & IB_PORT_SM) {
+			if (ibp->rvp.port_cap_flags & IB_PORT_SM) {
 				ret = IB_MAD_RESULT_SUCCESS;
 				goto bail;
 			}
@@ -2456,7 +2458,7 @@ static void xmit_wait_timer_func(unsigned long opaque)
 	unsigned long flags;
 	u8 status;
 
-	spin_lock_irqsave(&ppd->ibport_data.lock, flags);
+	spin_lock_irqsave(&ppd->ibport_data.rvp.lock, flags);
 	if (ppd->cong_stats.flags == IB_PMA_CONG_HW_CONTROL_SAMPLE) {
 		status = dd->f_portcntr(ppd, QIBPORTCNTR_PSSTAT);
 		if (status == IB_PMA_SAMPLE_STATUS_DONE) {
@@ -2469,7 +2471,7 @@ static void xmit_wait_timer_func(unsigned long opaque)
 	ppd->cong_stats.counter = xmit_wait_get_value_delta(ppd);
 	dd->f_set_cntr_sample(ppd, QIB_CONG_TIMER_PSINTERVAL, 0x0);
 done:
-	spin_unlock_irqrestore(&ppd->ibport_data.lock, flags);
+	spin_unlock_irqrestore(&ppd->ibport_data.rvp.lock, flags);
 	mod_timer(&ppd->cong_stats.timer, jiffies + HZ);
 }
 
@@ -2501,7 +2503,7 @@ int qib_create_agents(struct qib_ibdev *dev)
 		dd->pport[p].cong_stats.timer.expires = 0;
 		add_timer(&dd->pport[p].cong_stats.timer);
 
-		ibp->send_agent = agent;
+		ibp->rvp.send_agent = agent;
 	}
 
 	return 0;
 
@@ -2509,9 +2511,9 @@ int qib_create_agents(struct qib_ibdev *dev)
 err:
 	for (p = 0; p < dd->num_pports; p++) {
 		ibp = &dd->pport[p].ibport_data;
-		if (ibp->send_agent) {
-			agent = ibp->send_agent;
-			ibp->send_agent = NULL;
+		if (ibp->rvp.send_agent) {
+			agent = ibp->rvp.send_agent;
+			ibp->rvp.send_agent = NULL;
 			ib_unregister_mad_agent(agent);
 		}
 	}
@@ -2528,9 +2530,9 @@ void qib_free_agents(struct qib_ibdev *dev)
 
 	for (p = 0; p < dd->num_pports; p++) {
 		ibp = &dd->pport[p].ibport_data;
-		if (ibp->send_agent) {
-			agent = ibp->send_agent;
-			ibp->send_agent = NULL;
+		if (ibp->rvp.send_agent) {
+			agent = ibp->rvp.send_agent;
+			ibp->rvp.send_agent = NULL;
 			ib_unregister_mad_agent(agent);
 		}
 		if (ibp->sm_ah) {
diff --git a/drivers/infiniband/hw/qib/qib_qp.c b/drivers/infiniband/hw/qib/qib_qp.c
index 79907d1..b320246 100644
--- a/drivers/infiniband/hw/qib/qib_qp.c
+++ b/drivers/infiniband/hw/qib/qib_qp.c
@@ -230,9 +230,9 @@ static void insert_qp(struct qib_ibdev *dev, struct rvt_qp *qp)
 	spin_lock_irqsave(&dev->qpt_lock, flags);
 
 	if (qp->ibqp.qp_num == 0)
-		rcu_assign_pointer(ibp->qp0, qp);
+		rcu_assign_pointer(ibp->rvp.qp[0], qp);
 	else if (qp->ibqp.qp_num == 1)
-		rcu_assign_pointer(ibp->qp1, qp);
+		rcu_assign_pointer(ibp->rvp.qp[1], qp);
 	else {
 		qp->next = dev->qp_table[n];
 		rcu_assign_pointer(dev->qp_table[n], qp);
@@ -254,12 +254,12 @@ static void remove_qp(struct qib_ibdev *dev, struct rvt_qp *qp)
 
 	spin_lock_irqsave(&dev->qpt_lock, flags);
 
-	if (rcu_dereference_protected(ibp->qp0,
+	if (rcu_dereference_protected(ibp->rvp.qp[0],
+			lockdep_is_held(&dev->qpt_lock)) == qp) {
+		RCU_INIT_POINTER(ibp->rvp.qp[0], NULL);
+	} else if (rcu_dereference_protected(ibp->rvp.qp[1],
 			lockdep_is_held(&dev->qpt_lock)) == qp) {
-		RCU_INIT_POINTER(ibp->qp0, NULL);
-	} else if (rcu_dereference_protected(ibp->qp1,
-			lockdep_is_held(&dev->qpt_lock)) == qp) {
-		RCU_INIT_POINTER(ibp->qp1, NULL);
+		RCU_INIT_POINTER(ibp->rvp.qp[1], NULL);
 	} else {
 		struct rvt_qp *q;
 		struct rvt_qp __rcu **qpp;
@@ -305,9 +305,9 @@ unsigned qib_free_all_qps(struct qib_devdata *dd)
 		if (!qib_mcast_tree_empty(ibp))
 			qp_inuse++;
 		rcu_read_lock();
-		if (rcu_dereference(ibp->qp0))
+		if (rcu_dereference(ibp->rvp.qp[0]))
 			qp_inuse++;
-		if (rcu_dereference(ibp->qp1))
+		if (rcu_dereference(ibp->rvp.qp[1]))
 			qp_inuse++;
 		rcu_read_unlock();
 	}
@@ -343,9 +343,9 @@ struct rvt_qp *qib_lookup_qpn(struct qib_ibport *ibp, u32 qpn)
 	rcu_read_lock();
 	if (unlikely(qpn <= 1)) {
 		if (qpn == 0)
-			qp = rcu_dereference(ibp->qp0);
+			qp = rcu_dereference(ibp->rvp.qp[0]);
 		else
-			qp = rcu_dereference(ibp->qp1);
+			qp = rcu_dereference(ibp->rvp.qp[1]);
 		if (qp)
 			atomic_inc(&qp->refcount);
 	} else {
diff --git a/drivers/infiniband/hw/qib/qib_rc.c b/drivers/infiniband/hw/qib/qib_rc.c
index 46e6c97..1e8463d 100644
--- a/drivers/infiniband/hw/qib/qib_rc.c
+++ b/drivers/infiniband/hw/qib/qib_rc.c
@@ -760,7 +760,7 @@ void qib_send_rc_ack(struct rvt_qp *qp)
 
 queue_ack:
 	if (ib_qib_state_ops[qp->state] & QIB_PROCESS_RECV_OK) {
-		ibp->n_rc_qacks++;
+		this_cpu_inc(*ibp->rvp.rc_qacks);
 		qp->s_flags |= QIB_S_ACK_PENDING | QIB_S_RESP_PENDING;
 		qp->s_nak_state = qp->r_nak_state;
 		qp->s_ack_psn = qp->r_ack_psn;
@@ -888,9 +888,9 @@ static void qib_restart_rc(struct rvt_qp *qp, u32 psn, int wait)
 
 	ibp = to_iport(qp->ibqp.device, qp->port_num);
 	if (wqe->wr.opcode == IB_WR_RDMA_READ)
-		ibp->n_rc_resends++;
+		ibp->rvp.n_rc_resends++;
 	else
-		ibp->n_rc_resends += (qp->s_psn - psn) & QIB_PSN_MASK;
+		ibp->rvp.n_rc_resends += (qp->s_psn - psn) & QIB_PSN_MASK;
 
 	qp->s_flags &= ~(QIB_S_WAIT_FENCE | QIB_S_WAIT_RDMAR |
 			 QIB_S_WAIT_SSN_CREDIT | QIB_S_WAIT_PSN |
@@ -913,7 +913,7 @@ static void rc_timeout(unsigned long arg)
 	spin_lock(&qp->s_lock);
 	if (qp->s_flags & QIB_S_TIMER) {
 		ibp = to_iport(qp->ibqp.device, qp->port_num);
-		ibp->n_rc_timeouts++;
+		ibp->rvp.n_rc_timeouts++;
 		qp->s_flags &= ~QIB_S_TIMER;
 		del_timer(&qp->s_timer);
 		qib_restart_rc(qp, qp->s_last_psn + 1, 1);
@@ -1087,7 +1087,7 @@ static struct rvt_swqe *do_rc_completion(struct rvt_qp *qp,
 		if (++qp->s_last >= qp->s_size)
 			qp->s_last = 0;
 	} else
-		ibp->n_rc_delayed_comp++;
+		this_cpu_inc(*ibp->rvp.rc_delayed_comp);
 
 	qp->s_retry = qp->s_retry_cnt;
 	update_last_psn(qp, wqe->lpsn);
@@ -1232,7 +1232,7 @@ static int do_rc_ack(struct rvt_qp *qp, u32 aeth, u32 psn, int opcode,
 
 	switch (aeth >> 29) {
 	case 0:         /* ACK */
-		ibp->n_rc_acks++;
+		this_cpu_inc(*ibp->rvp.rc_acks);
 		if (qp->s_acked != qp->s_tail) {
 			/*
 			 * We are expecting more ACKs so
@@ -1261,7 +1261,7 @@ static int do_rc_ack(struct rvt_qp *qp, u32 aeth, u32 psn, int opcode,
 		goto bail;
 
 	case 1:         /* RNR NAK */
-		ibp->n_rnr_naks++;
+		ibp->rvp.n_rnr_naks++;
 		if (qp->s_acked == qp->s_tail)
 			goto bail;
 		if (qp->s_flags & QIB_S_WAIT_RNR)
@@ -1276,7 +1276,7 @@ static int do_rc_ack(struct rvt_qp *qp, u32 aeth, u32 psn, int opcode,
 		/* The last valid PSN is the previous PSN. */
 		update_last_psn(qp, psn - 1);
 
-		ibp->n_rc_resends += (qp->s_psn - psn) & QIB_PSN_MASK;
+		ibp->rvp.n_rc_resends += (qp->s_psn - psn) & QIB_PSN_MASK;
 
 		reset_psn(qp, psn);
 
@@ -1297,7 +1297,7 @@ static int do_rc_ack(struct rvt_qp *qp, u32 aeth, u32 psn, int opcode,
 		switch ((aeth >> QIB_AETH_CREDIT_SHIFT) &
 			QIB_AETH_CREDIT_MASK) {
 		case 0: /* PSN sequence error */
-			ibp->n_seq_naks++;
+			ibp->rvp.n_seq_naks++;
 			/*
 			 * Back up to the responder's expected PSN.
 			 * Note that we might get a NAK in the middle of an
@@ -1310,17 +1310,17 @@ static int do_rc_ack(struct rvt_qp *qp, u32 aeth, u32 psn, int opcode,
 
 		case 1: /* Invalid Request */
 			status = IB_WC_REM_INV_REQ_ERR;
-			ibp->n_other_naks++;
+			ibp->rvp.n_other_naks++;
 			goto class_b;
 
 		case 2: /* Remote Access Error */
 			status = IB_WC_REM_ACCESS_ERR;
-			ibp->n_other_naks++;
+			ibp->rvp.n_other_naks++;
 			goto class_b;
 
 		case 3: /* Remote Operation Error */
 			status = IB_WC_REM_OP_ERR;
-			ibp->n_other_naks++;
+			ibp->rvp.n_other_naks++;
 class_b:
 			if (qp->s_last == qp->s_acked) {
 				qib_send_complete(qp, wqe, status);
@@ -1371,7 +1371,7 @@ static void rdma_seq_err(struct rvt_qp *qp, struct qib_ibport *ibp, u32 psn,
 		wqe = do_rc_completion(qp, wqe, ibp);
 	}
 
-	ibp->n_rdma_seq++;
+	ibp->rvp.n_rdma_seq++;
 	qp->r_flags |= QIB_R_RDMAR_SEQ;
 	qib_restart_rc(qp, qp->s_last_psn + 1, 0);
 	if (list_empty(&qp->rspwait)) {
@@ -1643,7 +1643,7 @@ static int qib_rc_rcv_error(struct qib_other_headers *ohdr,
 		 * Don't queue the NAK if we already sent one.
 		 */
 		if (!qp->r_nak_state) {
-			ibp->n_rc_seqnak++;
+			ibp->rvp.n_rc_seqnak++;
 			qp->r_nak_state = IB_NAK_PSN_ERROR;
 			/* Use the expected PSN. */
 			qp->r_ack_psn = qp->r_psn;
@@ -1679,7 +1679,7 @@ static int qib_rc_rcv_error(struct qib_other_headers *ohdr,
 	 */
 	e = NULL;
 	old_req = 1;
-	ibp->n_rc_dupreq++;
+	ibp->rvp.n_rc_dupreq++;
 
 	spin_lock_irqsave(&qp->s_lock, flags);
 
diff --git a/drivers/infiniband/hw/qib/qib_ruc.c b/drivers/infiniband/hw/qib/qib_ruc.c
index 682447e..6290979 100644
--- a/drivers/infiniband/hw/qib/qib_ruc.c
+++ b/drivers/infiniband/hw/qib/qib_ruc.c
@@ -279,7 +279,8 @@ int qib_ruc_check_hdr(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 			if (!(qp->alt_ah_attr.ah_flags & IB_AH_GRH))
 				goto err;
 			guid = get_sguid(ibp, qp->alt_ah_attr.grh.sgid_index);
-			if (!gid_ok(&hdr->u.l.grh.dgid, ibp->gid_prefix, guid))
+			if (!gid_ok(&hdr->u.l.grh.dgid,
+				    ibp->rvp.gid_prefix, guid))
 				goto err;
 			if (!gid_ok(&hdr->u.l.grh.sgid,
 			    qp->alt_ah_attr.grh.dgid.global.subnet_prefix,
@@ -311,7 +312,8 @@ int qib_ruc_check_hdr(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 				goto err;
 			guid = get_sguid(ibp, qp->remote_ah_attr.grh.sgid_index);
-			if (!gid_ok(&hdr->u.l.grh.dgid, ibp->gid_prefix, guid))
+			if (!gid_ok(&hdr->u.l.grh.dgid,
+				    ibp->rvp.gid_prefix, guid))
 				goto err;
 			if (!gid_ok(&hdr->u.l.grh.sgid,
 			    qp->remote_ah_attr.grh.dgid.global.subnet_prefix,
@@ -409,7 +411,7 @@ again:
 	if (!qp || !(ib_qib_state_ops[qp->state] & QIB_PROCESS_RECV_OK) ||
 	    qp->ibqp.qp_type != sqp->ibqp.qp_type) {
-		ibp->n_pkt_drops++;
+		ibp->rvp.n_pkt_drops++;
 		/*
 		 * For RC, the requester would timeout and retry so
 		 * shortcut the timeouts and just signal too many retries.
@@ -566,7 +568,7 @@ again:
 send_comp:
 	spin_lock_irqsave(&sqp->s_lock, flags);
-	ibp->n_loop_pkts++;
+	ibp->rvp.n_loop_pkts++;
 flush_send:
 	sqp->s_rnr_retry = sqp->s_rnr_retry_cnt;
 	qib_send_complete(sqp, wqe, send_status);
@@ -576,7 +578,7 @@ rnr_nak:
 	/* Handle RNR NAK */
 	if (qp->ibqp.qp_type == IB_QPT_UC)
 		goto send_comp;
-	ibp->n_rnr_naks++;
+	ibp->rvp.n_rnr_naks++;
 	/*
 	 * Note: we don't need the s_lock held since the BUSY flag
 	 * makes this single threaded.
@@ -663,7 +665,7 @@ u32 qib_make_grh(struct qib_ibport *ibp, struct ib_grh *hdr,
 	hdr->next_hdr = IB_GRH_NEXT_HDR;
 	hdr->hop_limit = grh->hop_limit;
 	/* The SGID is 32-bit aligned. */
-	hdr->sgid.global.subnet_prefix = ibp->gid_prefix;
+	hdr->sgid.global.subnet_prefix = ibp->rvp.gid_prefix;
 	hdr->sgid.global.interface_id = grh->sgid_index ?
 		ibp->guids[grh->sgid_index - 1] : ppd_from_ibp(ibp)->guid;
 	hdr->dgid = grh->dgid;
diff --git a/drivers/infiniband/hw/qib/qib_sdma.c b/drivers/infiniband/hw/qib/qib_sdma.c
index 1395ed0..9d1104e 100644
--- a/drivers/infiniband/hw/qib/qib_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_sdma.c
@@ -702,7 +702,7 @@ busy:
 			struct qib_ibport *ibp;
 
 			ibp = &ppd->ibport_data;
-			ibp->n_dmawait++;
+			ibp->rvp.n_dmawait++;
 			qp->s_flags |= QIB_S_WAIT_DMA_DESC;
 			list_add_tail(&priv->iowait, &dev->dmawait);
 		}
diff --git a/drivers/infiniband/hw/qib/qib_sysfs.c b/drivers/infiniband/hw/qib/qib_sysfs.c
index 72a160e..fe4cf5e 100644
--- a/drivers/infiniband/hw/qib/qib_sysfs.c
+++ b/drivers/infiniband/hw/qib/qib_sysfs.c
@@ -406,7 +406,13 @@ static struct kobj_type qib_sl2vl_ktype = {
 #define QIB_DIAGC_ATTR(N) \
 	static struct qib_diagc_attr qib_diagc_attr_##N = { \
 		.attr = { .name = __stringify(N), .mode = 0664 }, \
-		.counter = offsetof(struct qib_ibport, n_##N) \
+		.counter = offsetof(struct qib_ibport, rvp.n_##N) \
+	}
+
+#define QIB_DIAGC_ATTR_PER_CPU(N) \
+	static struct qib_diagc_attr qib_diagc_attr_##N = { \
+		.attr = { .name = __stringify(N), .mode = 0664 }, \
+		.counter = offsetof(struct qib_ibport, rvp.z_##N) \
 	}
 
 struct qib_diagc_attr {
@@ -414,10 +420,11 @@ struct qib_diagc_attr {
 	size_t counter;
 };
 
+QIB_DIAGC_ATTR_PER_CPU(rc_acks);
+QIB_DIAGC_ATTR_PER_CPU(rc_qacks);
+QIB_DIAGC_ATTR_PER_CPU(rc_delayed_comp);
+
 QIB_DIAGC_ATTR(rc_resends);
-QIB_DIAGC_ATTR(rc_acks);
-QIB_DIAGC_ATTR(rc_qacks);
-QIB_DIAGC_ATTR(rc_delayed_comp);
 QIB_DIAGC_ATTR(seq_naks);
 QIB_DIAGC_ATTR(rdma_seq);
 QIB_DIAGC_ATTR(rnr_naks);
@@ -449,6 +456,35 @@ static struct attribute *diagc_default_attributes[] = {
 	NULL
 };
 
+static u64 get_all_cpu_total(u64 __percpu *cntr)
+{
+	int cpu;
+	u64 counter = 0;
+
+	for_each_possible_cpu(cpu)
+		counter += *per_cpu_ptr(cntr, cpu);
+	return counter;
+}
+
+#define def_write_per_cpu(cntr) \
+static void write_per_cpu_##cntr(struct qib_pportdata *ppd, u32 data) \
+{ \
+	struct qib_devdata *dd = ppd->dd; \
+	struct qib_ibport *qibp = &ppd->ibport_data; \
+	/* A write can only zero the counter */ \
+	if (data == 0) \
+		qibp->rvp.z_##cntr = get_all_cpu_total(qibp->rvp.cntr); \
+	else \
+		qib_dev_err(dd, "Per CPU cntrs can only be zeroed"); \
+}
+
+def_write_per_cpu(rc_acks)
+def_write_per_cpu(rc_qacks)
+def_write_per_cpu(rc_delayed_comp)
+
+#define READ_PER_CPU_CNTR(cntr) (get_all_cpu_total(qibp->rvp.cntr) - \
+				 qibp->rvp.z_##cntr)
+
 static ssize_t diagc_attr_show(struct kobject *kobj, struct attribute *attr,
 			       char *buf)
 {
@@ -458,7 +494,16 @@ static ssize_t diagc_attr_show(struct kobject *kobj, struct attribute *attr,
 		container_of(kobj, struct qib_pportdata, diagc_kobj);
 	struct qib_ibport *qibp = &ppd->ibport_data;
 
-	return sprintf(buf, "%u\n", *(u32 *)((char *)qibp + dattr->counter));
+	if (!strncmp(dattr->attr.name, "rc_acks", 7))
+		return sprintf(buf, "%llu\n", READ_PER_CPU_CNTR(rc_acks));
+	else if (!strncmp(dattr->attr.name, "rc_qacks", 8))
+		return sprintf(buf, "%llu\n", READ_PER_CPU_CNTR(rc_qacks));
+	else if (!strncmp(dattr->attr.name, "rc_delayed_comp", 15))
+		return sprintf(buf, "%llu\n",
+			       READ_PER_CPU_CNTR(rc_delayed_comp));
+	else
+		return sprintf(buf, "%u\n",
+			       *(u32 *)((char *)qibp + dattr->counter));
 }
 
 static ssize_t diagc_attr_store(struct kobject *kobj, struct attribute *attr,
@@ -475,7 +520,15 @@ static ssize_t diagc_attr_store(struct kobject *kobj, struct attribute *attr,
 	ret = kstrtou32(buf, 0, &val);
 	if (ret)
 		return ret;
-	*(u32 *)((char *) qibp + dattr->counter) = val;
+
+	if (!strncmp(dattr->attr.name, "rc_acks", 7))
+		write_per_cpu_rc_acks(ppd, val);
+	else if (!strncmp(dattr->attr.name, "rc_qacks", 8))
+		write_per_cpu_rc_qacks(ppd, val);
+	else if (!strncmp(dattr->attr.name, "rc_delayed_comp", 15))
+		write_per_cpu_rc_delayed_comp(ppd, val);
+	else
+		*(u32 *)((char *)qibp + dattr->counter) = val;
 	return size;
 }
 
diff --git a/drivers/infiniband/hw/qib/qib_uc.c b/drivers/infiniband/hw/qib/qib_uc.c
index 1ae135a..659ac51 100644
--- a/drivers/infiniband/hw/qib/qib_uc.c
+++ b/drivers/infiniband/hw/qib/qib_uc.c
@@ -527,7 +527,7 @@ rewind:
 	set_bit(QIB_R_REWIND_SGE, &qp->r_aflags);
 	qp->r_sge.num_sge = 0;
 drop:
-	ibp->n_pkt_drops++;
+	ibp->rvp.n_pkt_drops++;
 	return;
 
 op_err:
diff --git a/drivers/infiniband/hw/qib/qib_ud.c b/drivers/infiniband/hw/qib/qib_ud.c
index 6dc20ca..d84872d 100644
--- a/drivers/infiniband/hw/qib/qib_ud.c
+++ b/drivers/infiniband/hw/qib/qib_ud.c
@@ -62,7 +62,7 @@ static void qib_ud_loopback(struct rvt_qp *sqp, struct rvt_swqe *swqe)
 
 	qp = qib_lookup_qpn(ibp, swqe->ud_wr.remote_qpn);
 	if (!qp) {
-		ibp->n_pkt_drops++;
+		ibp->rvp.n_pkt_drops++;
 		return;
 	}
 
@@ -73,7 +73,7 @@ static void qib_ud_loopback(struct rvt_qp *sqp, struct rvt_swqe *swqe)
 
 	if (dqptype != sqptype ||
 	    !(ib_qib_state_ops[qp->state] & QIB_PROCESS_RECV_OK)) {
-		ibp->n_pkt_drops++;
+		ibp->rvp.n_pkt_drops++;
 		goto drop;
 	}
 
@@ -153,14 +153,14 @@ static void qib_ud_loopback(struct rvt_qp *sqp, struct rvt_swqe *swqe)
 		}
 		if (!ret) {
 			if (qp->ibqp.qp_num == 0)
-				ibp->n_vl15_dropped++;
+				ibp->rvp.n_vl15_dropped++;
 			goto bail_unlock;
 		}
 	}
 	/* Silently drop packets which are too big. */
 	if (unlikely(wc.byte_len > qp->r_len)) {
 		qp->r_flags |= QIB_R_REUSE_SGE;
-		ibp->n_pkt_drops++;
+		ibp->rvp.n_pkt_drops++;
 		goto bail_unlock;
 	}
 
@@ -219,7 +219,7 @@ static void qib_ud_loopback(struct rvt_qp *sqp, struct rvt_swqe *swqe)
 	/* Signal completion event if the solicited bit is set. */
 	qib_cq_enter(to_icq(qp->ibqp.recv_cq), &wc,
 		     swqe->wr.send_flags & IB_SEND_SOLICITED);
-	ibp->n_loop_pkts++;
+	ibp->rvp.n_loop_pkts++;
 bail_unlock:
 	spin_unlock_irqrestore(&qp->r_lock, flags);
 drop:
@@ -546,7 +546,7 @@ void qib_ud_rcv(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 		}
 		if (!ret) {
 			if (qp->ibqp.qp_num == 0)
-				ibp->n_vl15_dropped++;
+				ibp->rvp.n_vl15_dropped++;
 			return;
 		}
 	}
@@ -589,5 +589,5 @@ void qib_ud_rcv(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 	return;
 
 drop:
-	ibp->n_pkt_drops++;
+	ibp->rvp.n_pkt_drops++;
 }
diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
index a7629b8..fffff70 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.c
+++ b/drivers/infiniband/hw/qib/qib_verbs.c
@@ -581,7 +581,7 @@ static void qib_qp_rcv(struct qib_ctxtdata *rcd, struct qib_ib_header *hdr,
 
 	/* Check for valid receive state. */
 	if (!(ib_qib_state_ops[qp->state] & QIB_PROCESS_RECV_OK)) {
-		ibp->n_pkt_drops++;
+		ibp->rvp.n_pkt_drops++;
 		goto unlock;
 	}
 
@@ -711,7 +711,7 @@ void qib_ib_rcv(struct qib_ctxtdata *rcd, void *rhdr, void *data, u32 tlen)
 	return;
 
 drop:
-	ibp->n_pkt_drops++;
+	ibp->rvp.n_pkt_drops++;
 }
 
 /*
@@ -1251,7 +1251,7 @@ err_tx:
 	qib_put_txreq(tx);
 	ret = wait_kmem(dev, qp);
 unaligned:
-	ibp->n_unaligned++;
+	ibp->rvp.n_unaligned++;
 bail:
 	return ret;
 bail_tx:
@@ -1642,16 +1642,16 @@ static int qib_query_port(struct ib_device *ibdev, u8 port,
 	memset(props, 0, sizeof(*props));
 	props->lid = lid ? lid : be16_to_cpu(IB_LID_PERMISSIVE);
 	props->lmc = ppd->lmc;
-	props->sm_lid = ibp->sm_lid;
-	props->sm_sl = ibp->sm_sl;
+	props->sm_lid = ibp->rvp.sm_lid;
+	props->sm_sl = ibp->rvp.sm_sl;
 	props->state = dd->f_iblink_state(ppd->lastibcstat);
 	props->phys_state = dd->f_ibphys_portstate(ppd->lastibcstat);
-	props->port_cap_flags = ibp->port_cap_flags;
+	props->port_cap_flags = ibp->rvp.port_cap_flags;
 	props->gid_tbl_len = QIB_GUIDS_PER_PORT;
 	props->max_msg_sz = 0x80000000;
 	props->pkey_tbl_len = qib_get_npkeys(dd);
-	props->bad_pkey_cntr = ibp->pkey_violations;
-	props->qkey_viol_cntr = ibp->qkey_violations;
+	props->bad_pkey_cntr = ibp->rvp.pkey_violations;
+	props->qkey_viol_cntr = ibp->rvp.qkey_violations;
 	props->active_width = ppd->link_width_active;
 	/* See rate_show() */
 	props->active_speed = ppd->link_speed_active;
@@ -1679,7 +1679,7 @@ static int qib_query_port(struct ib_device *ibdev, u8 port,
 		mtu = IB_MTU_2048;
 	}
 	props->active_mtu = mtu;
-	props->subnet_timeout = ibp->subnet_timeout;
+	props->subnet_timeout = ibp->rvp.subnet_timeout;
 
 	return 0;
 }
@@ -1729,14 +1729,14 @@ static int qib_modify_port(struct ib_device *ibdev, u8 port,
 	struct qib_ibport *ibp = to_iport(ibdev, port);
 	struct qib_pportdata *ppd = ppd_from_ibp(ibp);
 
-	ibp->port_cap_flags |= props->set_port_cap_mask;
-	ibp->port_cap_flags &= ~props->clr_port_cap_mask;
+	ibp->rvp.port_cap_flags |= props->set_port_cap_mask;
+	ibp->rvp.port_cap_flags &= ~props->clr_port_cap_mask;
 	if (props->set_port_cap_mask || props->clr_port_cap_mask)
 		qib_cap_mask_chg(ibp);
 	if (port_modify_mask & IB_PORT_SHUTDOWN)
 		qib_set_linkstate(ppd, QIB_IB_LINKDOWN);
 	if (port_modify_mask & IB_PORT_RESET_QKEY_CNTR)
-		ibp->qkey_violations = 0;
+		ibp->rvp.qkey_violations = 0;
 	return 0;
 }
 
@@ -1752,7 +1752,7 @@ static int qib_query_gid(struct ib_device *ibdev, u8 port,
 	struct qib_ibport *ibp = to_iport(ibdev, port);
 	struct qib_pportdata *ppd = ppd_from_ibp(ibp);
 
-	gid->global.subnet_prefix = ibp->gid_prefix;
+	gid->global.subnet_prefix = ibp->rvp.gid_prefix;
 	if (index == 0)
 		gid->global.interface_id = ppd->guid;
 	else if (index < QIB_GUIDS_PER_PORT)
@@ -1782,7 +1782,7 @@ struct ib_ah *qib_create_qp0_ah(struct qib_ibport *ibp, u16 dlid)
 	attr.dlid = dlid;
 	attr.port_num = ppd_from_ibp(ibp)->port;
 	rcu_read_lock();
-	qp0 = rcu_dereference(ibp->qp0);
+	qp0 = rcu_dereference(ibp->rvp.qp[0]);
 	if (qp0)
 		ah = ib_create_ah(qp0->ibqp.pd, &attr);
 	rcu_read_unlock();
@@ -1871,22 +1871,22 @@ static void init_ibport(struct qib_pportdata *ppd)
 	struct qib_verbs_counters cntrs;
 	struct qib_ibport *ibp = &ppd->ibport_data;
 
-	spin_lock_init(&ibp->lock);
+	spin_lock_init(&ibp->rvp.lock);
 	/* Set the prefix to the default value (see ch. 4.1.1) */
-	ibp->gid_prefix = IB_DEFAULT_GID_PREFIX;
-	ibp->sm_lid = be16_to_cpu(IB_LID_PERMISSIVE);
-	ibp->port_cap_flags = IB_PORT_SYS_IMAGE_GUID_SUP |
+	ibp->rvp.gid_prefix = IB_DEFAULT_GID_PREFIX;
+	ibp->rvp.sm_lid = be16_to_cpu(IB_LID_PERMISSIVE);
+	ibp->rvp.port_cap_flags = IB_PORT_SYS_IMAGE_GUID_SUP |
 		IB_PORT_CLIENT_REG_SUP | IB_PORT_SL_MAP_SUP |
 		IB_PORT_TRAP_SUP | IB_PORT_AUTO_MIGR_SUP |
 		IB_PORT_DR_NOTICE_SUP | IB_PORT_CAP_MASK_NOTICE_SUP |
 		IB_PORT_OTHER_LOCAL_CHANGES_SUP;
 	if (ppd->dd->flags & QIB_HAS_LINK_LATENCY)
-		ibp->port_cap_flags |= IB_PORT_LINK_LATENCY_SUP;
-	ibp->pma_counter_select[0] = IB_PMA_PORT_XMIT_DATA;
-	ibp->pma_counter_select[1] = IB_PMA_PORT_RCV_DATA;
-	ibp->pma_counter_select[2] = IB_PMA_PORT_XMIT_PKTS;
-	ibp->pma_counter_select[3] = IB_PMA_PORT_RCV_PKTS;
-	ibp->pma_counter_select[4] = IB_PMA_PORT_XMIT_WAIT;
+		ibp->rvp.port_cap_flags |= IB_PORT_LINK_LATENCY_SUP;
+	ibp->rvp.pma_counter_select[0] = IB_PMA_PORT_XMIT_DATA;
+	ibp->rvp.pma_counter_select[1] = IB_PMA_PORT_RCV_DATA;
+	ibp->rvp.pma_counter_select[2] = IB_PMA_PORT_XMIT_PKTS;
+	ibp->rvp.pma_counter_select[3] = IB_PMA_PORT_RCV_PKTS;
+	ibp->rvp.pma_counter_select[4] = IB_PMA_PORT_XMIT_WAIT;
 	/* Snapshot current HW counters to "clear" them. */
 	qib_get_counters(ppd, &cntrs);
 
@@ -1906,8 +1906,8 @@ static void init_ibport(struct qib_pportdata *ppd)
 	ibp->z_excessive_buffer_overrun_errors =
 		cntrs.excessive_buffer_overrun_errors;
 	ibp->z_vl15_dropped = cntrs.vl15_dropped;
-	RCU_INIT_POINTER(ibp->qp0, NULL);
-	RCU_INIT_POINTER(ibp->qp1, NULL);
+	RCU_INIT_POINTER(ibp->rvp.qp[0], NULL);
+	RCU_INIT_POINTER(ibp->rvp.qp[1], NULL);
 }
 
 static int qib_port_immutable(struct ib_device *ibdev, u8 port_num,
diff --git a/drivers/infiniband/hw/qib/qib_verbs.h b/drivers/infiniband/hw/qib/qib_verbs.h
index 00dd2ad..538d3a6 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.h
+++ b/drivers/infiniband/hw/qib/qib_verbs.h
@@ -401,21 +401,10 @@ struct qib_pma_counters {
 };
 
 struct qib_ibport {
-	struct rvt_qp __rcu *qp0;
-	struct rvt_qp __rcu *qp1;
-	struct ib_mad_agent *send_agent;	/* agent for SMI (traps) */
+	struct rvt_ibport rvp;
 	struct rvt_ah *sm_ah;
 	struct rvt_ah *smi_ah;
-	struct rb_root mcast_tree;
-	spinlock_t lock;		/* protect changes in this struct */
-
-	/* non-zero when timer is set */
-	unsigned long mkey_lease_timeout;
-	unsigned long trap_timeout;
-	__be64 gid_prefix;	/* in network order */
-	__be64 mkey;
 	__be64 guids[QIB_GUIDS_PER_PORT - 1];	/* writable GUIDs */
-	u64 tid;		/* TID for traps */
 	struct qib_pma_counters __percpu *pmastats;
 	u64 z_unicast_xmit;     /* starting count for PMA */
 	u64 z_unicast_rcv;      /* starting count for PMA */
@@ -434,42 +423,9 @@ struct qib_ibport {
 	u32 z_local_link_integrity_errors;      /* starting count for PMA */
 	u32 z_excessive_buffer_overrun_errors;  /* starting count for PMA */
 	u32 z_vl15_dropped;                     /* starting count for PMA */
-	u32 n_rc_resends;
-	u32 n_rc_acks;
-	u32 n_rc_qacks;
-	u32 n_rc_delayed_comp;
-	u32 n_seq_naks;
-	u32 n_rdma_seq;
-	u32 n_rnr_naks;
-	u32 n_other_naks;
-	u32 n_loop_pkts;
-	u32 n_pkt_drops;
-	u32 n_vl15_dropped;
-	u32 n_rc_timeouts;
-	u32 n_dmawait;
-	u32 n_unaligned;
-	u32 n_rc_dupreq;
-	u32 n_rc_seqnak;
-	u32 port_cap_flags;
-	u32 pma_sample_start;
-	u32 pma_sample_interval;
-	__be16 pma_counter_select[5];
-	u16 pma_tag;
-	u16 pkey_violations;
-	u16 qkey_violations;
-	u16 mkey_violations;
-	u16 mkey_lease_period;
-	u16 sm_lid;
-	u16 repress_traps;
-	u8 sm_sl;
-	u8 mkeyprot;
-	u8 subnet_timeout;
-	u8 vl_high_limit;
 	u8 sl_to_vl[16];
-
 };
 
 struct qib_ibdev {
 	struct rvt_dev_info rdi;
 	struct list_head pending_mmaps;
diff --git a/drivers/infiniband/hw/qib/qib_verbs_mcast.c b/drivers/infiniband/hw/qib/qib_verbs_mcast.c
index 73b487b..6ab3aee 100644
--- a/drivers/infiniband/hw/qib/qib_verbs_mcast.c
+++ b/drivers/infiniband/hw/qib/qib_verbs_mcast.c
@@ -114,8 +114,8 @@ struct qib_mcast *qib_mcast_find(struct qib_ibport *ibp, union ib_gid *mgid)
 	unsigned long flags;
 	struct qib_mcast *mcast;
 
-	spin_lock_irqsave(&ibp->lock, flags);
-	n = ibp->mcast_tree.rb_node;
+	spin_lock_irqsave(&ibp->rvp.lock, flags);
+	n = ibp->rvp.mcast_tree.rb_node;
 	while (n) {
 		int ret;
 
@@ -129,11 +129,11 @@ struct qib_mcast *qib_mcast_find(struct qib_ibport *ibp, union ib_gid *mgid)
 			n = n->rb_right;
 		else {
 			atomic_inc(&mcast->refcount);
-			spin_unlock_irqrestore(&ibp->lock, flags);
+			spin_unlock_irqrestore(&ibp->rvp.lock, flags);
 			goto bail;
 		}
 	}
-	spin_unlock_irqrestore(&ibp->lock, flags);
+	spin_unlock_irqrestore(&ibp->rvp.lock, flags);
 
 	mcast = NULL;
 
@@ -153,11 +153,11 @@ bail:
 static int qib_mcast_add(struct qib_ibdev *dev, struct qib_ibport *ibp,
 			 struct qib_mcast *mcast, struct qib_mcast_qp *mqp)
 {
-	struct rb_node **n = &ibp->mcast_tree.rb_node;
+	struct rb_node **n = &ibp->rvp.mcast_tree.rb_node;
 	struct rb_node *pn = NULL;
 	int ret;
 
-	spin_lock_irq(&ibp->lock);
+	spin_lock_irq(&ibp->rvp.lock);
 
 	while (*n) {
 		struct qib_mcast *tmcast;
@@ -212,12 +212,12 @@ static int qib_mcast_add(struct qib_ibdev *dev, struct qib_ibport *ibp,
 
 	atomic_inc(&mcast->refcount);
 	rb_link_node(&mcast->rb_node, pn, n);
-	rb_insert_color(&mcast->rb_node, &ibp->mcast_tree);
+	rb_insert_color(&mcast->rb_node, &ibp->rvp.mcast_tree);
 
 	ret = 0;
 
bail:
-	spin_unlock_irq(&ibp->lock);
+	spin_unlock_irq(&ibp->rvp.lock);
 
 	return ret;
 }
@@ -296,13 +296,13 @@ int qib_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
 		goto bail;
 	}
 
-	spin_lock_irq(&ibp->lock);
+	spin_lock_irq(&ibp->rvp.lock);
 
 	/* Find the GID in the mcast table. */
-	n = ibp->mcast_tree.rb_node;
+	n = ibp->rvp.mcast_tree.rb_node;
 	while (1) {
 		if (n == NULL) {
-			spin_unlock_irq(&ibp->lock);
+			spin_unlock_irq(&ibp->rvp.lock);
 			ret = -EINVAL;
 			goto bail;
 		}
@@ -331,13 +331,13 @@ int qib_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
 
 		/* If this was the last attached QP, remove the GID too. */
 		if (list_empty(&mcast->qp_list)) {
-			rb_erase(&mcast->rb_node, &ibp->mcast_tree);
+			rb_erase(&mcast->rb_node, &ibp->rvp.mcast_tree);
 			last = 1;
 		}
 		break;
 	}
 
-	spin_unlock_irq(&ibp->lock);
+	spin_unlock_irq(&ibp->rvp.lock);
 
 	if (p) {
 		/*
@@ -364,5 +364,5 @@ bail:
 
 int qib_mcast_tree_empty(struct qib_ibport *ibp)
 {
-	return ibp->mcast_tree.rb_node == NULL;
+	return !(ibp->rvp.mcast_tree.rb_node);
 }