From patchwork Tue Aug 1 10:37:55 2017
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 9874169
From: Anup Patel
To: Rob Herring, Mark Rutland, Vinod Koul, Dan Williams
Cc: devicetree@vger.kernel.org, Florian Fainelli, Anup Patel, Scott Branden,
    Ray Jui, linux-kernel@vger.kernel.org, bcm-kernel-feedback-list@broadcom.com,
    dmaengine@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 11/16] dmaengine: bcm-sba-raid: Peek mbox when we have no free requests
Date: Tue, 1 Aug 2017 16:07:55 +0530
Message-Id: <1501583880-32072-12-git-send-email-anup.patel@broadcom.com>
In-Reply-To: <1501583880-32072-1-git-send-email-anup.patel@broadcom.com>
References: <1501583880-32072-1-git-send-email-anup.patel@broadcom.com>
X-Mailer: git-send-email 2.7.4

When setting up a RAID array on several NVMe disks, we observed that
sba_alloc_request() starts failing (because no free requests are left)
and RAID array setup becomes very slow.

To improve performance, we now peek the mbox channels when we have no
free requests. This speeds up RAID array setup because mbox requests
that have completed but have not yet been processed by the mbox
completion worker are handled immediately by the mbox channel peek.

Signed-off-by: Anup Patel
Reviewed-by: Ray Jui
Reviewed-by: Scott Branden
---
 drivers/dma/bcm-sba-raid.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/drivers/dma/bcm-sba-raid.c b/drivers/dma/bcm-sba-raid.c
index f14ed0a..399250e 100644
--- a/drivers/dma/bcm-sba-raid.c
+++ b/drivers/dma/bcm-sba-raid.c
@@ -200,6 +200,14 @@ static inline u32 __pure sba_cmd_pq_c_mdata(u32 d, u32 b1, u32 b0)
 
 /* ====== General helper routines ===== */
 
+static void sba_peek_mchans(struct sba_device *sba)
+{
+	int mchan_idx;
+
+	for (mchan_idx = 0; mchan_idx < sba->mchans_count; mchan_idx++)
+		mbox_client_peek_data(sba->mchans[mchan_idx]);
+}
+
 static struct sba_request *sba_alloc_request(struct sba_device *sba)
 {
 	unsigned long flags;
@@ -211,8 +219,17 @@ static struct sba_request *sba_alloc_request(struct sba_device *sba)
 	if (req)
 		list_move_tail(&req->node, &sba->reqs_alloc_list);
 	spin_unlock_irqrestore(&sba->reqs_lock, flags);
-	if (!req)
+
+	if (!req) {
+		/*
+		 * We have no more free requests so, we peek
+		 * mailbox channels hoping few active requests
+		 * would have completed which will create more
+		 * room for new requests.
+		 */
+		sba_peek_mchans(sba);
 		return NULL;
+	}
 
 	req->flags = SBA_REQUEST_STATE_ALLOCED;
 	req->first = req;
@@ -560,17 +577,15 @@ static enum dma_status sba_tx_status(struct dma_chan *dchan,
 				     dma_cookie_t cookie,
 				     struct dma_tx_state *txstate)
 {
-	int mchan_idx;
 	enum dma_status ret;
 	struct sba_device *sba = to_sba_device(dchan);
 
-	for (mchan_idx = 0; mchan_idx < sba->mchans_count; mchan_idx++)
-		mbox_client_peek_data(sba->mchans[mchan_idx]);
-
 	ret = dma_cookie_status(dchan, cookie, txstate);
 	if (ret == DMA_COMPLETE)
 		return ret;
 
+	sba_peek_mchans(sba);
+
 	return dma_cookie_status(dchan, cookie, txstate);
 }
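
For background (this note is not part of the patch itself): mbox_client_peek_data() is
the generic Linux mailbox framework call that sba_peek_mchans() loops over. It asks the
mailbox controller whether received data is already pending on a channel and, if so, has
that data delivered through the client's rx_callback right away instead of waiting for
the controller's interrupt or completion worker. A minimal client-side sketch of that
interaction follows; the names example_rx_callback() and example_drain_channel() are
illustrative assumptions, not code taken from bcm-sba-raid.c.

#include <linux/mailbox_client.h>

/* Illustrative rx_callback: the mailbox core invokes this whenever the
 * controller reports received data, including data found by a peek.
 * A real driver would complete the request described by msg here. */
static void example_rx_callback(struct mbox_client *cl, void *msg)
{
	/* process the completed mailbox message */
}

/* Illustrative helper: synchronously deliver any messages that have
 * already completed on the channel, without waiting for the
 * completion worker to run. */
static void example_drain_channel(struct mbox_chan *chan)
{
	mbox_client_peek_data(chan);
}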