From patchwork Mon Oct 12 12:57:04 2015
X-Patchwork-Submitter: Rajkumar Manoharan
X-Patchwork-Id: 7374881
From: Rajkumar Manoharan
To:
Subject: [PATCH v2 5/7] ath10k: Configure copy engine 5 for HTT messages
Date: Mon, 12 Oct 2015 18:27:04 +0530
Message-ID: <1444654626-3290-6-git-send-email-rmanohar@qti.qualcomm.com>
In-Reply-To: <1444654626-3290-1-git-send-email-rmanohar@qti.qualcomm.com>
References: <1444654626-3290-1-git-send-email-rmanohar@qti.qualcomm.com>
Cc: linux-wireless@vger.kernel.org, Rajkumar Manoharan

Currently, target-to-host (T2H) HTT messages are received on copy engine 1
and are processed by the HTC layer on both host and target. To avoid this
HTC-level processing overhead on both sides, the otherwise unused copy
engine 5 is now used for receiving HTT T2H messages. This speeds up receive
data processing as well as HTT tx completion. Hence the host and target
copy engine configuration tables are updated to enable the CE5 pipe, and
the in-direction HTT service mapping now points to CE5 for all HTT T2H
messages.

Moreover, since CE4 is not interrupt-driven, HTT send completion messages
have so far been polled from the HTC handler. For faster tx completion, CE4
polling is now done whenever the CE pipe that transports HTT rx
(target->host) is processed, which avoids the overhead of polling HTT
messages from the HTC layer. Servicing CE4 faster helps to resolve the
"failed to transmit packet, dropping: -105" errors.

Reviewed-by: Michal Kazior
Signed-off-by: Rajkumar Manoharan
---
v2:
 * fix invalid dma memory access (ATH10K_SKB_RXCB is used instead of
   ATH10K_SKB_CB in htt_tx_cb)
 * process CE4 send completions first, before processing rx
   (a standalone sketch of this servicing order is appended after the diff)

 drivers/net/wireless/ath/ath10k/pci.c | 70 +++++++++++++++++++++++++++++------
 1 file changed, 59 insertions(+), 11 deletions(-)

diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
index efeb766..447c4c6 100644
--- a/drivers/net/wireless/ath/ath10k/pci.c
+++ b/drivers/net/wireless/ath/ath10k/pci.c
@@ -106,6 +106,8 @@ static int ath10k_pci_bmi_wait(struct ath10k_ce_pipe *tx_pipe,
 static int ath10k_pci_qca99x0_chip_reset(struct ath10k *ar);
 static void ath10k_pci_htc_tx_cb(struct ath10k_ce_pipe *ce_state);
 static void ath10k_pci_htc_rx_cb(struct ath10k_ce_pipe *ce_state);
+static void ath10k_pci_htt_tx_cb(struct ath10k_ce_pipe *ce_state);
+static void ath10k_pci_htt_rx_cb(struct ath10k_ce_pipe *ce_state);
 
 static const struct ce_attr host_ce_config_wlan[] = {
 	/* CE0: host->target HTC control and raw streams */
@@ -150,15 +152,16 @@ static const struct ce_attr host_ce_config_wlan[] = {
 		.src_nentries = CE_HTT_H2T_MSG_SRC_NENTRIES,
 		.src_sz_max = 256,
 		.dest_nentries = 0,
-		.send_cb = ath10k_pci_htc_tx_cb,
+		.send_cb = ath10k_pci_htt_tx_cb,
 	},
 
-	/* CE5: unused */
+	/* CE5: target->host HTT (HIF->HTT) */
 	{
 		.flags = CE_ATTR_FLAGS,
 		.src_nentries = 0,
-		.src_sz_max = 0,
-		.dest_nentries = 0,
+		.src_sz_max = 512,
+		.dest_nentries = 512,
+		.recv_cb = ath10k_pci_htt_rx_cb,
 	},
 
 	/* CE6: target autonomous hif_memcpy */
@@ -264,12 +267,12 @@ static const struct ce_pipe_config target_ce_config_wlan[] = {
 	/* NB: 50% of src nentries, since tx has 2 frags */
 
-	/* CE5: unused */
+	/* CE5: target->host HTT (HIF->HTT) */
 	{
 		.pipenum = __cpu_to_le32(5),
-		.pipedir = __cpu_to_le32(PIPEDIR_OUT),
+		.pipedir = __cpu_to_le32(PIPEDIR_IN),
 		.nentries = __cpu_to_le32(32),
-		.nbytes_max = __cpu_to_le32(2048),
+		.nbytes_max = __cpu_to_le32(512),
 		.flags = __cpu_to_le32(CE_ATTR_FLAGS),
 		.reserved = __cpu_to_le32(0),
 	},
@@ -403,7 +406,7 @@ static const struct service_to_pipe target_service_to_ce_map_wlan[] = {
 	{
 		__cpu_to_le32(ATH10K_HTC_SVC_ID_HTT_DATA_MSG),
 		__cpu_to_le32(PIPEDIR_IN),	/* in = DL = target -> host */
-		__cpu_to_le32(1),
+		__cpu_to_le32(5),
 	},
 	/* (Additions here) */
@@ -1174,8 +1177,9 @@ static void ath10k_pci_htc_tx_cb(struct ath10k_ce_pipe *ce_state)
 	ath10k_htc_tx_completion_handler(ar, skb);
 }
 
-/* Called by lower (CE) layer when data is received from the Target. */
-static void ath10k_pci_htc_rx_cb(struct ath10k_ce_pipe *ce_state)
+static void ath10k_pci_process_rx_cb(struct ath10k_ce_pipe *ce_state,
+				     void (*callback)(struct ath10k *ar,
+						      struct sk_buff *skb))
 {
 	struct ath10k *ar = ce_state->ar;
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
@@ -1214,12 +1218,56 @@ static void ath10k_pci_htc_rx_cb(struct ath10k_ce_pipe *ce_state)
 		ath10k_dbg_dump(ar, ATH10K_DBG_PCI_DUMP, NULL, "pci rx: ",
 				skb->data, skb->len);
 
-		ath10k_htc_rx_completion_handler(ar, skb);
+		callback(ar, skb);
 	}
 
 	ath10k_pci_rx_post_pipe(pipe_info);
 }
 
+/* Called by lower (CE) layer when data is received from the Target. */
+static void ath10k_pci_htc_rx_cb(struct ath10k_ce_pipe *ce_state)
+{
+	ath10k_pci_process_rx_cb(ce_state, ath10k_htc_rx_completion_handler);
+}
+
+/* Called by lower (CE) layer when a send to HTT Target completes. */
+static void ath10k_pci_htt_tx_cb(struct ath10k_ce_pipe *ce_state)
+{
+	struct ath10k *ar = ce_state->ar;
+	struct sk_buff *skb;
+	u32 ce_data;
+	unsigned int nbytes;
+	unsigned int transfer_id;
+
+	while (ath10k_ce_completed_send_next(ce_state, (void **)&skb, &ce_data,
+					     &nbytes, &transfer_id) == 0) {
+		/* no need to call tx completion for NULL pointers */
+		if (!skb)
+			continue;
+
+		dma_unmap_single(ar->dev, ATH10K_SKB_CB(skb)->paddr,
+				 skb->len, DMA_TO_DEVICE);
+		ath10k_htt_hif_tx_complete(ar, skb);
+	}
+}
+
+static void ath10k_pci_htt_rx_deliver(struct ath10k *ar, struct sk_buff *skb)
+{
+	skb_pull(skb, sizeof(struct ath10k_htc_hdr));
+	ath10k_htt_t2h_msg_handler(ar, skb);
+}
+
+/* Called by lower (CE) layer when HTT data is received from the Target. */
+static void ath10k_pci_htt_rx_cb(struct ath10k_ce_pipe *ce_state)
+{
+	/* CE4 polling needs to be done whenever CE pipe which transports
+	 * HTT Rx (target->host) is processed.
+	 */
+	ath10k_ce_per_engine_service(ce_state->ar, 4);
+
+	ath10k_pci_process_rx_cb(ce_state, ath10k_pci_htt_rx_deliver);
+}
+
 static int ath10k_pci_hif_tx_sg(struct ath10k *ar, u8 pipe_id,
 				struct ath10k_hif_sg_item *items, int n_items)
 {
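
For anyone who wants to experiment with the servicing order described in the
changelog without the driver, below is a small standalone userspace sketch in
plain C. It is NOT ath10k code: every structure and function name in it is
invented for illustration. It only models the idea that the CE5 HTT rx
handler reaps CE4 send completions before delivering its own T2H messages,
so a single CE5 event makes progress on both tx completion and rx.

/* toy_ce5.c - standalone userspace model, NOT ath10k driver code.
 * All structures and names below are invented for illustration only.
 */
#include <stdio.h>

#define NUM_CE 8

struct toy_ce {
	int pending_send;	/* completed-but-unreaped sends (CE4-like) */
	int pending_recv;	/* buffered rx messages (CE5-like) */
};

static struct toy_ce ce[NUM_CE];

/* Reap send completions on one engine; stands in for polling CE4. */
static void toy_ce_service_send(int id)
{
	while (ce[id].pending_send > 0) {
		ce[id].pending_send--;
		printf("CE%d: tx completion reaped\n", id);
	}
}

/* Rx handler for CE5: poll CE4 first, then deliver HTT T2H messages,
 * mirroring the ordering the patch uses in ath10k_pci_htt_rx_cb().
 */
static void toy_htt_rx_cb(int id)
{
	toy_ce_service_send(4);

	while (ce[id].pending_recv > 0) {
		ce[id].pending_recv--;
		printf("CE%d: HTT T2H message delivered\n", id);
	}
}

int main(void)
{
	ce[4].pending_send = 2;	/* two HTT tx descriptors awaiting reap */
	ce[5].pending_recv = 3;	/* three T2H messages queued on CE5 */

	toy_htt_rx_cb(5);	/* one CE5 event services both engines */
	return 0;
}

Because the real CE4 is not interrupt-driven (as the commit message notes),
reaping its completions from the CE5 rx path is what keeps HTT tx
descriptors from sitting unreaped and triggering the "failed to transmit
packet, dropping: -105" warnings.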