From patchwork Tue Sep 14 06:04:18 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Xiao Yang
X-Patchwork-Id: 12491767
From: "yangx.jy@fujitsu.com"
To: Bob Pearson, "jgg@nvidia.com", "zyjzyj2000@gmail.com", "linux-rdma@vger.kernel.org", "mie@igel.co.jp", "bvanassche@acm.org"
Subject: Re: [PATCH for-rc v3 1/6] RDMA/rxe: Add memory barriers to kernel queues
Thread-Topic: [PATCH for-rc v3 1/6] RDMA/rxe: Add memory barriers to kernel queues
Date: Tue, 14 Sep 2021 06:04:18 +0000
References: <20210909204456.7476-1-rpearsonhpe@gmail.com> <20210909204456.7476-2-rpearsonhpe@gmail.com>
In-Reply-To: <20210909204456.7476-2-rpearsonhpe@gmail.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
Precedence: bulk
X-Mailing-List: linux-rdma@vger.kernel.org

Hi Bob,

Why do you want to use the FROM_CLIENT and TO_CLIENT suffixes? The FROM_USER and TO_USER suffixes seem more readable (i.e. they name the transfer between user space and kernel space).

Best Regards,
Xiao Yang

-----Original Message-----
From: Bob Pearson
Sent: September 10, 2021 4:45
To: jgg@nvidia.com; zyjzyj2000@gmail.com; linux-rdma@vger.kernel.org; mie@igel.co.jp; bvanassche@acm.org
Cc: Bob Pearson
Subject: [PATCH for-rc v3 1/6] RDMA/rxe: Add memory barriers to kernel queues

Earlier patches added memory barriers to protect user space to kernel space communications. The user space queues were previously shown to have occasional memory synchronization errors, which were removed by adding smp_load_acquire(), smp_store_release() barriers. This patch extends that to the case where queues are used between kernel space threads.
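[Editorial note: the acquire/release scheme the commit message describes can be sketched in plain userspace C. This is not the driver's code — C11 atomics stand in for the kernel's smp_load_acquire()/smp_store_release(), and all names here (ring_push/ring_pop, RING_SIZE) are illustrative.]

```c
#include <assert.h>
#include <stdatomic.h>

/* Userspace sketch of a single-producer/single-consumer ring with the
 * same index protocol as the rxe queues: the producer publishes the
 * element before releasing the new producer_index, and the consumer
 * acquires producer_index before reading the element. */

#define RING_SIZE 8                      /* must be a power of two */
#define INDEX_MASK (RING_SIZE - 1)

struct ring {
        _Atomic unsigned int producer_index;
        _Atomic unsigned int consumer_index;
        int data[RING_SIZE];
};

/* Producer side: write the slot, then release the advanced index. */
int ring_push(struct ring *q, int val)
{
        unsigned int prod = atomic_load_explicit(&q->producer_index,
                                                 memory_order_relaxed);
        unsigned int cons = atomic_load_explicit(&q->consumer_index,
                                                 memory_order_acquire);

        if (((prod + 1 - cons) & INDEX_MASK) == 0)
                return -1;               /* full */
        q->data[prod & INDEX_MASK] = val;
        atomic_store_explicit(&q->producer_index, (prod + 1) & INDEX_MASK,
                              memory_order_release);
        return 0;
}

/* Consumer side: acquire the producer index, then read the slot. */
int ring_pop(struct ring *q, int *val)
{
        unsigned int cons = atomic_load_explicit(&q->consumer_index,
                                                 memory_order_relaxed);
        unsigned int prod = atomic_load_explicit(&q->producer_index,
                                                 memory_order_acquire);

        if (((prod - cons) & INDEX_MASK) == 0)
                return -1;               /* empty */
        *val = q->data[cons & INDEX_MASK];
        atomic_store_explicit(&q->consumer_index, (cons + 1) & INDEX_MASK,
                              memory_order_release);
        return 0;
}
```

As in the rxe helpers, the full test ((prod + 1 - cons) & INDEX_MASK) == 0 means the ring holds at most RING_SIZE - 1 entries, and the release/acquire pairing guarantees the data is visible before the index that advertises it.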
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c  | 10 +---
 drivers/infiniband/sw/rxe/rxe_cq.c    | 25 ++-------
 drivers/infiniband/sw/rxe/rxe_qp.c    | 10 ++--
 drivers/infiniband/sw/rxe/rxe_queue.h | 73 ++++++++-------------------
 drivers/infiniband/sw/rxe/rxe_req.c   | 21 ++------
 drivers/infiniband/sw/rxe/rxe_resp.c  | 38 ++++----------
 drivers/infiniband/sw/rxe/rxe_srq.c   |  2 +-
 drivers/infiniband/sw/rxe/rxe_verbs.c | 53 ++++---------------
 8 files changed, 55 insertions(+), 177 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index d2d802c776fd..ed4e3f29bd65 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -142,10 +142,7 @@ static inline enum comp_state get_wqe(struct rxe_qp *qp,
 	/* we come here whether or not we found a response packet to see if
 	 * there are any posted WQEs
 	 */
-	if (qp->is_user)
-		wqe = queue_head(qp->sq.queue, QUEUE_TYPE_FROM_USER);
-	else
-		wqe = queue_head(qp->sq.queue, QUEUE_TYPE_KERNEL);
+	wqe = queue_head(qp->sq.queue, QUEUE_TYPE_FROM_CLIENT);
 	*wqe_p = wqe;
 
 	/* no WQE or requester has not started it yet */
@@ -432,10 +429,7 @@ static void do_complete(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 	if (post)
 		make_send_cqe(qp, wqe, &cqe);
 
-	if (qp->is_user)
-		advance_consumer(qp->sq.queue, QUEUE_TYPE_FROM_USER);
-	else
-		advance_consumer(qp->sq.queue, QUEUE_TYPE_KERNEL);
+	advance_consumer(qp->sq.queue, QUEUE_TYPE_FROM_CLIENT);
 
 	if (post)
 		rxe_cq_post(qp->scq, &cqe, 0);
diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index aef288f164fd..4e26c2ea4a59 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -25,11 +25,7 @@ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq,
 	}
 
 	if (cq) {
-		if (cq->is_user)
-			count = queue_count(cq->queue, QUEUE_TYPE_TO_USER);
-		else
-			count = queue_count(cq->queue, QUEUE_TYPE_KERNEL);
-
+		count = queue_count(cq->queue, QUEUE_TYPE_TO_CLIENT);
 		if (cqe < count) {
 			pr_warn("cqe(%d) < current # elements in queue (%d)",
 				cqe, count);
@@ -65,7 +61,7 @@ int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
 	int err;
 	enum queue_type type;
 
-	type = uresp ? QUEUE_TYPE_TO_USER : QUEUE_TYPE_KERNEL;
+	type = QUEUE_TYPE_TO_CLIENT;
 	cq->queue = rxe_queue_init(rxe, &cqe,
 			sizeof(struct rxe_cqe), type);
 	if (!cq->queue) {
@@ -117,11 +113,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
 
 	spin_lock_irqsave(&cq->cq_lock, flags);
 
-	if (cq->is_user)
-		full = queue_full(cq->queue, QUEUE_TYPE_TO_USER);
-	else
-		full = queue_full(cq->queue, QUEUE_TYPE_KERNEL);
-
+	full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT);
 	if (unlikely(full)) {
 		spin_unlock_irqrestore(&cq->cq_lock, flags);
 		if (cq->ibcq.event_handler) {
@@ -134,17 +126,10 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
 		return -EBUSY;
 	}
 
-	if (cq->is_user)
-		addr = producer_addr(cq->queue, QUEUE_TYPE_TO_USER);
-	else
-		addr = producer_addr(cq->queue, QUEUE_TYPE_KERNEL);
-
+	addr = producer_addr(cq->queue, QUEUE_TYPE_TO_CLIENT);
 	memcpy(addr, cqe, sizeof(*cqe));
 
-	if (cq->is_user)
-		advance_producer(cq->queue, QUEUE_TYPE_TO_USER);
-	else
-		advance_producer(cq->queue, QUEUE_TYPE_KERNEL);
+	advance_producer(cq->queue, QUEUE_TYPE_TO_CLIENT);
 
 	spin_unlock_irqrestore(&cq->cq_lock, flags);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 1ab6af7ddb25..2e923af642f8 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -231,7 +231,7 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 	qp->sq.max_inline = init->cap.max_inline_data = wqe_size;
 	wqe_size += sizeof(struct rxe_send_wqe);
 
-	type = uresp ? QUEUE_TYPE_FROM_USER : QUEUE_TYPE_KERNEL;
+	type = QUEUE_TYPE_FROM_CLIENT;
 	qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr,
 				wqe_size, type);
 	if (!qp->sq.queue)
@@ -248,12 +248,8 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 		return err;
 	}
 
-	if (qp->is_user)
 	qp->req.wqe_index = producer_index(qp->sq.queue,
-					QUEUE_TYPE_FROM_USER);
-	else
-		qp->req.wqe_index = producer_index(qp->sq.queue,
-					QUEUE_TYPE_KERNEL);
+					QUEUE_TYPE_FROM_CLIENT);
 
 	qp->req.state = QP_STATE_RESET;
 	qp->req.opcode = -1;
@@ -293,7 +289,7 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
 		pr_debug("qp#%d max_wr = %d, max_sge = %d, wqe_size = %d\n",
 			 qp_num(qp), qp->rq.max_wr, qp->rq.max_sge, wqe_size);
 
-		type = uresp ? QUEUE_TYPE_FROM_USER : QUEUE_TYPE_KERNEL;
+		type = QUEUE_TYPE_FROM_CLIENT;
 		qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr,
 					wqe_size, type);
 		if (!qp->rq.queue)
diff --git a/drivers/infiniband/sw/rxe/rxe_queue.h b/drivers/infiniband/sw/rxe/rxe_queue.h
index 2702b0e55fc3..d465aa9342e1 100644
--- a/drivers/infiniband/sw/rxe/rxe_queue.h
+++ b/drivers/infiniband/sw/rxe/rxe_queue.h
@@ -35,9 +35,8 @@
 
 /* type of queue */
 enum queue_type {
-	QUEUE_TYPE_KERNEL,
-	QUEUE_TYPE_TO_USER,
-	QUEUE_TYPE_FROM_USER,
+	QUEUE_TYPE_TO_CLIENT,
+	QUEUE_TYPE_FROM_CLIENT,
 };
 
 struct rxe_queue {
@@ -87,20 +86,16 @@ static inline int queue_empty(struct rxe_queue *q, enum queue_type type)
 	u32 cons;
 
 	switch (type) {
-	case QUEUE_TYPE_FROM_USER:
+	case QUEUE_TYPE_FROM_CLIENT:
 		/* protect user space index */
 		prod = smp_load_acquire(&q->buf->producer_index);
 		cons = q->index;
 		break;
-	case QUEUE_TYPE_TO_USER:
+	case QUEUE_TYPE_TO_CLIENT:
 		prod = q->index;
 		/* protect user space index */
 		cons = smp_load_acquire(&q->buf->consumer_index);
 		break;
-	case QUEUE_TYPE_KERNEL:
-		prod = q->buf->producer_index;
-		cons = q->buf->consumer_index;
-		break;
 	}
 
 	return ((prod - cons) & q->index_mask) == 0;
@@ -112,20 +107,16 @@ static inline int queue_full(struct rxe_queue *q, enum queue_type type)
 	u32 cons;
 
 	switch (type) {
-	case QUEUE_TYPE_FROM_USER:
+	case QUEUE_TYPE_FROM_CLIENT:
 		/* protect user space index */
 		prod = smp_load_acquire(&q->buf->producer_index);
 		cons = q->index;
 		break;
-	case QUEUE_TYPE_TO_USER:
+	case QUEUE_TYPE_TO_CLIENT:
 		prod = q->index;
 		/* protect user space index */
 		cons = smp_load_acquire(&q->buf->consumer_index);
 		break;
-	case QUEUE_TYPE_KERNEL:
-		prod = q->buf->producer_index;
-		cons = q->buf->consumer_index;
-		break;
 	}
 
 	return ((prod + 1 - cons) & q->index_mask) == 0;
@@ -138,20 +129,16 @@ static inline unsigned int queue_count(const struct rxe_queue *q,
 	u32 cons;
 
 	switch (type) {
-	case QUEUE_TYPE_FROM_USER:
+	case QUEUE_TYPE_FROM_CLIENT:
 		/* protect user space index */
 		prod = smp_load_acquire(&q->buf->producer_index);
 		cons = q->index;
 		break;
-	case QUEUE_TYPE_TO_USER:
+	case QUEUE_TYPE_TO_CLIENT:
 		prod = q->index;
 		/* protect user space index */
 		cons = smp_load_acquire(&q->buf->consumer_index);
 		break;
-	case QUEUE_TYPE_KERNEL:
-		prod = q->buf->producer_index;
-		cons = q->buf->consumer_index;
-		break;
 	}
 
 	return (prod - cons) & q->index_mask;
@@ -162,7 +149,7 @@ static inline void advance_producer(struct rxe_queue *q, enum queue_type type)
 	u32 prod;
 
 	switch (type) {
-	case QUEUE_TYPE_FROM_USER:
+	case QUEUE_TYPE_FROM_CLIENT:
 		pr_warn_once("Normally kernel should not write user space index\n");
 		/* protect user space index */
 		prod = smp_load_acquire(&q->buf->producer_index);
@@ -170,15 +157,11 @@ static inline void advance_producer(struct rxe_queue *q, enum queue_type type)
 		/* same */
 		smp_store_release(&q->buf->producer_index, prod);
 		break;
-	case QUEUE_TYPE_TO_USER:
+	case QUEUE_TYPE_TO_CLIENT:
 		prod = q->index;
 		q->index = (prod + 1) & q->index_mask;
 		q->buf->producer_index = q->index;
 		break;
-	case QUEUE_TYPE_KERNEL:
-		prod = q->buf->producer_index;
-		q->buf->producer_index = (prod + 1) & q->index_mask;
-		break;
 	}
 }
 
@@ -187,12 +170,12 @@ static inline void advance_consumer(struct rxe_queue *q, enum queue_type type)
 	u32 cons;
 
 	switch (type) {
-	case QUEUE_TYPE_FROM_USER:
+	case QUEUE_TYPE_FROM_CLIENT:
 		cons = q->index;
 		q->index = (cons + 1) & q->index_mask;
 		q->buf->consumer_index = q->index;
 		break;
-	case QUEUE_TYPE_TO_USER:
+	case QUEUE_TYPE_TO_CLIENT:
 		pr_warn_once("Normally kernel should not write user space index\n");
 		/* protect user space index */
 		cons = smp_load_acquire(&q->buf->consumer_index);
@@ -200,10 +183,6 @@ static inline void advance_consumer(struct rxe_queue *q, enum queue_type type)
 		/* same */
 		smp_store_release(&q->buf->consumer_index, cons);
 		break;
-	case QUEUE_TYPE_KERNEL:
-		cons = q->buf->consumer_index;
-		q->buf->consumer_index = (cons + 1) & q->index_mask;
-		break;
 	}
 }
 
@@ -212,17 +191,14 @@ static inline void *producer_addr(struct rxe_queue *q, enum queue_type type)
 	u32 prod;
 
 	switch (type) {
-	case QUEUE_TYPE_FROM_USER:
+	case QUEUE_TYPE_FROM_CLIENT:
 		/* protect user space index */
 		prod = smp_load_acquire(&q->buf->producer_index);
 		prod &= q->index_mask;
 		break;
-	case QUEUE_TYPE_TO_USER:
+	case QUEUE_TYPE_TO_CLIENT:
 		prod = q->index;
 		break;
-	case QUEUE_TYPE_KERNEL:
-		prod = q->buf->producer_index;
-		break;
 	}
 
 	return q->buf->data + (prod << q->log2_elem_size);
@@ -233,17 +209,14 @@ static inline void *consumer_addr(struct rxe_queue *q, enum queue_type type)
 	u32 cons;
 
 	switch (type) {
-	case QUEUE_TYPE_FROM_USER:
+	case QUEUE_TYPE_FROM_CLIENT:
 		cons = q->index;
 		break;
-	case QUEUE_TYPE_TO_USER:
+	case QUEUE_TYPE_TO_CLIENT:
 		/* protect user space index */
 		cons = smp_load_acquire(&q->buf->consumer_index);
 		cons &= q->index_mask;
 		break;
-	case QUEUE_TYPE_KERNEL:
-		cons = q->buf->consumer_index;
-		break;
 	}
 
 	return q->buf->data + (cons << q->log2_elem_size);
@@ -255,17 +228,14 @@ static inline unsigned int producer_index(struct rxe_queue *q,
 	u32 prod;
 
 	switch (type) {
-	case QUEUE_TYPE_FROM_USER:
+	case QUEUE_TYPE_FROM_CLIENT:
 		/* protect user space index */
 		prod = smp_load_acquire(&q->buf->producer_index);
 		prod &= q->index_mask;
 		break;
-	case QUEUE_TYPE_TO_USER:
+	case QUEUE_TYPE_TO_CLIENT:
 		prod = q->index;
 		break;
-	case QUEUE_TYPE_KERNEL:
-		prod = q->buf->producer_index;
-		break;
 	}
 
 	return prod;
@@ -277,17 +247,14 @@ static inline unsigned int consumer_index(struct rxe_queue *q,
 	u32 cons;
 
 	switch (type) {
-	case QUEUE_TYPE_FROM_USER:
+	case QUEUE_TYPE_FROM_CLIENT:
 		cons = q->index;
 		break;
-	case QUEUE_TYPE_TO_USER:
+	case QUEUE_TYPE_TO_CLIENT:
 		/* protect user space index */
 		cons = smp_load_acquire(&q->buf->consumer_index);
 		cons &= q->index_mask;
 		break;
-	case QUEUE_TYPE_KERNEL:
-		cons = q->buf->consumer_index;
-		break;
 	}
 
 	return cons;
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 3894197a82f6..22c3edb28945 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -49,13 +49,8 @@ static void req_retry(struct rxe_qp *qp)
 	unsigned int cons;
 	unsigned int prod;
 
-	if (qp->is_user) {
-		cons = consumer_index(q, QUEUE_TYPE_FROM_USER);
-		prod = producer_index(q, QUEUE_TYPE_FROM_USER);
-	} else {
-		cons = consumer_index(q, QUEUE_TYPE_KERNEL);
-		prod = producer_index(q, QUEUE_TYPE_KERNEL);
-	}
+	cons = consumer_index(q, QUEUE_TYPE_FROM_CLIENT);
+	prod = producer_index(q, QUEUE_TYPE_FROM_CLIENT);
 
 	qp->req.wqe_index = cons;
 	qp->req.psn = qp->comp.psn;
@@ -121,15 +116,9 @@ static struct rxe_send_wqe *req_next_wqe(struct rxe_qp *qp)
 	unsigned int cons;
 	unsigned int prod;
 
-	if (qp->is_user) {
-		wqe = queue_head(q, QUEUE_TYPE_FROM_USER);
-		cons = consumer_index(q, QUEUE_TYPE_FROM_USER);
-		prod = producer_index(q, QUEUE_TYPE_FROM_USER);
-	} else {
-		wqe = queue_head(q, QUEUE_TYPE_KERNEL);
-		cons = consumer_index(q, QUEUE_TYPE_KERNEL);
-		prod = producer_index(q, QUEUE_TYPE_KERNEL);
-	}
+	wqe = queue_head(q, QUEUE_TYPE_FROM_CLIENT);
+	cons = consumer_index(q, QUEUE_TYPE_FROM_CLIENT);
+	prod = producer_index(q, QUEUE_TYPE_FROM_CLIENT);
 
 	if (unlikely(qp->req.state == QP_STATE_DRAIN)) {
 		/* check to see if we are drained;
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 5501227ddc65..596be002d33d 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -303,10 +303,7 @@ static enum resp_states get_srq_wqe(struct rxe_qp *qp)
 
 	spin_lock_bh(&srq->rq.consumer_lock);
 
-	if (qp->is_user)
-		wqe = queue_head(q, QUEUE_TYPE_FROM_USER);
-	else
-		wqe = queue_head(q, QUEUE_TYPE_KERNEL);
+	wqe = queue_head(q, QUEUE_TYPE_FROM_CLIENT);
 	if (!wqe) {
 		spin_unlock_bh(&srq->rq.consumer_lock);
 		return RESPST_ERR_RNR;
@@ -322,13 +319,8 @@ static enum resp_states get_srq_wqe(struct rxe_qp *qp)
 	memcpy(&qp->resp.srq_wqe, wqe, size);
 
 	qp->resp.wqe = &qp->resp.srq_wqe.wqe;
-	if (qp->is_user) {
-		advance_consumer(q, QUEUE_TYPE_FROM_USER);
-		count = queue_count(q, QUEUE_TYPE_FROM_USER);
-	} else {
-		advance_consumer(q, QUEUE_TYPE_KERNEL);
-		count = queue_count(q, QUEUE_TYPE_KERNEL);
-	}
+	advance_consumer(q, QUEUE_TYPE_FROM_CLIENT);
+	count = queue_count(q, QUEUE_TYPE_FROM_CLIENT);
 
 	if (srq->limit && srq->ibsrq.event_handler && (count < srq->limit)) {
 		srq->limit = 0;
@@ -357,12 +349,8 @@ static enum resp_states check_resource(struct rxe_qp *qp,
 			qp->resp.status = IB_WC_WR_FLUSH_ERR;
 			return RESPST_COMPLETE;
 		} else if (!srq) {
-			if (qp->is_user)
-				qp->resp.wqe = queue_head(qp->rq.queue,
-						QUEUE_TYPE_FROM_USER);
-			else
-				qp->resp.wqe = queue_head(qp->rq.queue,
-						QUEUE_TYPE_KERNEL);
+			qp->resp.wqe = queue_head(qp->rq.queue,
+					QUEUE_TYPE_FROM_CLIENT);
 			if (qp->resp.wqe) {
 				qp->resp.status = IB_WC_WR_FLUSH_ERR;
 				return RESPST_COMPLETE;
@@ -389,12 +377,8 @@ static enum resp_states check_resource(struct rxe_qp *qp,
 	if (srq)
 		return get_srq_wqe(qp);
 
-	if (qp->is_user)
-		qp->resp.wqe = queue_head(qp->rq.queue,
-				QUEUE_TYPE_FROM_USER);
-	else
-		qp->resp.wqe = queue_head(qp->rq.queue,
-				QUEUE_TYPE_KERNEL);
+	qp->resp.wqe = queue_head(qp->rq.queue,
+			QUEUE_TYPE_FROM_CLIENT);
 	return (qp->resp.wqe) ? RESPST_CHK_LENGTH : RESPST_ERR_RNR;
 }
 
@@ -936,12 +920,8 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 	}
 
 	/* have copy for srq and reference for !srq */
-	if (!qp->srq) {
-		if (qp->is_user)
-			advance_consumer(qp->rq.queue, QUEUE_TYPE_FROM_USER);
-		else
-			advance_consumer(qp->rq.queue, QUEUE_TYPE_KERNEL);
-	}
+	if (!qp->srq)
+		advance_consumer(qp->rq.queue, QUEUE_TYPE_FROM_CLIENT);
 
 	qp->resp.wqe = NULL;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 610c98d24b5c..a9e7817e2732 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -93,7 +93,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	spin_lock_init(&srq->rq.producer_lock);
 	spin_lock_init(&srq->rq.consumer_lock);
 
-	type = uresp ? QUEUE_TYPE_FROM_USER : QUEUE_TYPE_KERNEL;
+	type = QUEUE_TYPE_FROM_CLIENT;
 	q = rxe_queue_init(rxe, &srq->rq.max_wr,
 			srq_wqe_size, type);
 	if (!q) {
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 267b5a9c345d..dc70e3edeba6 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -218,11 +218,7 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr)
 	int num_sge = ibwr->num_sge;
 	int full;
 
-	if (rq->is_user)
-		full = queue_full(rq->queue, QUEUE_TYPE_FROM_USER);
-	else
-		full = queue_full(rq->queue, QUEUE_TYPE_KERNEL);
-
+	full = queue_full(rq->queue, QUEUE_TYPE_FROM_CLIENT);
 	if (unlikely(full)) {
 		err = -ENOMEM;
 		goto err1;
@@ -237,11 +233,7 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr)
 	for (i = 0; i < num_sge; i++)
 		length += ibwr->sg_list[i].length;
 
-	if (rq->is_user)
-		recv_wqe = producer_addr(rq->queue, QUEUE_TYPE_FROM_USER);
-	else
-		recv_wqe = producer_addr(rq->queue, QUEUE_TYPE_KERNEL);
-
+	recv_wqe = producer_addr(rq->queue, QUEUE_TYPE_FROM_CLIENT);
 	recv_wqe->wr_id = ibwr->wr_id;
 	recv_wqe->num_sge = num_sge;
@@ -254,10 +246,7 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr)
 	recv_wqe->dma.cur_sge = 0;
 	recv_wqe->dma.sge_offset = 0;
 
-	if (rq->is_user)
-		advance_producer(rq->queue, QUEUE_TYPE_FROM_USER);
-	else
-		advance_producer(rq->queue, QUEUE_TYPE_KERNEL);
+	advance_producer(rq->queue, QUEUE_TYPE_FROM_CLIENT);
 
 	return 0;
 
@@ -633,27 +622,17 @@ static int post_one_send(struct rxe_qp *qp, const struct ib_send_wr *ibwr,
 
 	spin_lock_irqsave(&qp->sq.sq_lock, flags);
 
-	if (qp->is_user)
-		full = queue_full(sq->queue, QUEUE_TYPE_FROM_USER);
-	else
-		full = queue_full(sq->queue, QUEUE_TYPE_KERNEL);
+	full = queue_full(sq->queue, QUEUE_TYPE_FROM_CLIENT);
 
 	if (unlikely(full)) {
 		spin_unlock_irqrestore(&qp->sq.sq_lock, flags);
 		return -ENOMEM;
 	}
 
-	if (qp->is_user)
-		send_wqe = producer_addr(sq->queue, QUEUE_TYPE_FROM_USER);
-	else
-		send_wqe = producer_addr(sq->queue, QUEUE_TYPE_KERNEL);
-
+	send_wqe = producer_addr(sq->queue, QUEUE_TYPE_FROM_CLIENT);
 	init_send_wqe(qp, ibwr, mask, length, send_wqe);
 
-	if (qp->is_user)
-		advance_producer(sq->queue, QUEUE_TYPE_FROM_USER);
-	else
-		advance_producer(sq->queue, QUEUE_TYPE_KERNEL);
+	advance_producer(sq->queue, QUEUE_TYPE_FROM_CLIENT);
 
 	spin_unlock_irqrestore(&qp->sq.sq_lock, flags);
 
@@ -845,18 +824,12 @@ static int rxe_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 
 	spin_lock_irqsave(&cq->cq_lock, flags);
 	for (i = 0; i < num_entries; i++) {
-		if (cq->is_user)
-			cqe = queue_head(cq->queue, QUEUE_TYPE_TO_USER);
-		else
-			cqe = queue_head(cq->queue, QUEUE_TYPE_KERNEL);
+		cqe = queue_head(cq->queue, QUEUE_TYPE_TO_CLIENT);
 		if (!cqe)
 			break;
 
 		memcpy(wc++, &cqe->ibwc, sizeof(*wc));
-		if (cq->is_user)
-			advance_consumer(cq->queue, QUEUE_TYPE_TO_USER);
-		else
-			advance_consumer(cq->queue, QUEUE_TYPE_KERNEL);
+		advance_consumer(cq->queue, QUEUE_TYPE_TO_CLIENT);
 	}
 	spin_unlock_irqrestore(&cq->cq_lock, flags);
 
@@ -868,10 +841,7 @@ static int rxe_peek_cq(struct ib_cq *ibcq, int wc_cnt)
 	struct rxe_cq *cq = to_rcq(ibcq);
 	int count;
 
-	if (cq->is_user)
-		count = queue_count(cq->queue, QUEUE_TYPE_TO_USER);
-	else
-		count = queue_count(cq->queue, QUEUE_TYPE_KERNEL);
+	count = queue_count(cq->queue, QUEUE_TYPE_TO_CLIENT);
 
 	return (count > wc_cnt) ? wc_cnt : count;
 }
@@ -887,10 +857,7 @@ static int rxe_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
 	if (cq->notify != IB_CQ_NEXT_COMP)
 		cq->notify = flags & IB_CQ_SOLICITED_MASK;
 
-	if (cq->is_user)
-		empty = queue_empty(cq->queue, QUEUE_TYPE_TO_USER);
-	else
-		empty = queue_empty(cq->queue, QUEUE_TYPE_KERNEL);
+	empty = queue_empty(cq->queue, QUEUE_TYPE_TO_CLIENT);
 
 	if ((flags & IB_CQ_REPORT_MISSED_EVENTS) && !empty)
 		ret = 1;

-- 
2.30.2
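[Editorial note: for readers following the queue helpers above, the index arithmetic can be illustrated outside the kernel. With a power-of-two ring (mask = size - 1), `(prod - cons) & index_mask` gives the element count even when the producer slot has wrapped around below the consumer slot. The function name `ring_count` is mine, not the driver's:]

```c
#include <assert.h>

/* Illustration of the index arithmetic used by queue_count() in
 * rxe_queue.h: unsigned subtraction plus masking yields the number of
 * queued elements even after the producer index wraps past zero.
 * Not driver code. */
unsigned int ring_count(unsigned int prod, unsigned int cons,
                        unsigned int mask)
{
        return (prod - cons) & mask;
}
```

For an 8-slot ring (mask 7), `ring_count(5, 2, 7)` is 3, and so is `ring_count(1, 6, 7)`: slots 6, 7 and 0 are occupied even though the producer index is numerically below the consumer index.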