
ath10k: remove iteration in wake_tx_queue

Message ID: 20190327162906.6010-1-erik.stromdahl@gmail.com (mailing list archive)
State: New, archived
Series: ath10k: remove iteration in wake_tx_queue

Commit Message

Erik Stromdahl March 27, 2019, 4:29 p.m. UTC
Iterating the TX queue and thereby dequeuing all available packets in the
queue could result in performance penalties on some SMP systems.

The reason for this is most likely that the per-ac lock (active_txq_lock)
in mac80211 will be held by the CPU iterating the current queue.

This will lock up other CPUs trying to push new messages on the TX
queue.

Instead of iterating the queue we fetch just one packet at a time,
resulting in minimal starvation of the other CPUs.

Reported-by: Yibo Zhao <yiboz@codeaurora.org>
Signed-off-by: Erik Stromdahl <erik.stromdahl@gmail.com>
---
 drivers/net/wireless/ath/ath10k/mac.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)
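
The contention pattern described in the commit message can be sketched in
plain C. The following is a hypothetical userspace model, not ath10k or
mac80211 code; the queue type, push_to_hw() and all other names are invented
purely for illustration:

#include <pthread.h>

struct packet {
	struct packet *next;
};

struct queue {
	pthread_mutex_t lock;	/* stands in for the per-ac lock */
	struct packet *head;
};

/* Hypothetical hardware push; stands in for the actual TX path. */
static void push_to_hw(struct packet *pkt)
{
	(void)pkt;
}

/* Draining variant (old behavior): the consumer holds q->lock for the
 * whole burst, so producers on other CPUs block on the lock until the
 * queue is empty.
 */
static void wake_drain(struct queue *q)
{
	pthread_mutex_lock(&q->lock);
	while (q->head) {
		struct packet *pkt = q->head;

		q->head = pkt->next;
		push_to_hw(pkt);
	}
	pthread_mutex_unlock(&q->lock);
}

/* One-at-a-time variant (what this patch does, conceptually): the lock
 * is held only long enough to unlink a single packet, so producers can
 * interleave their enqueues between dequeues.
 */
static void wake_single(struct queue *q)
{
	struct packet *pkt;

	pthread_mutex_lock(&q->lock);
	pkt = q->head;
	if (pkt)
		q->head = pkt->next;
	pthread_mutex_unlock(&q->lock);

	if (pkt)
		push_to_hw(pkt);
}

In the draining variant the enqueue path of every other CPU contends with the
consumer for the duration of the whole burst; in the one-at-a-time variant the
critical section shrinks to a single unlink, which is the effect the patch
aims for.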

Comments

Peter Oh March 27, 2019, 5:49 p.m. UTC | #1
On 03/27/2019 09:29 AM, Erik Stromdahl wrote:
> Iterating the TX queue and thereby dequeuing all available packets in the
> queue could result in performance penalties on some SMP systems.
>
Please share the test results and numbers you've gathered to help inform
others' thoughts.

Thanks,
Peter
Erik Stromdahl March 29, 2019, 7:47 a.m. UTC | #2
On 3/27/19 6:49 PM, Peter Oh wrote:
> 
> 
> On 03/27/2019 09:29 AM, Erik Stromdahl wrote:
>> Iterating the TX queue and thereby dequeuing all available packets in the
>> queue could result in performance penalties on some SMP systems.
>>
> Please share the test results and numbers you've gathered to help inform
> others' thoughts.
> 

I haven't run any tests with ath10k PCIe, but Yibo Zhao had noticed a 10%
degradation without this patch.

Yibo:
Can you provide iperf results etc. that show the performance gain?

--
Erik
Toke Høiland-Jørgensen April 1, 2019, 11:05 a.m. UTC | #3
Erik Stromdahl <erik.stromdahl@gmail.com> writes:

> Iterating the TX queue and thereby dequeuing all available packets in the
> queue could result in performance penalties on some SMP systems.
>
> The reason for this is most likely that the per-ac lock (active_txq_lock)
> in mac80211 will be held by the CPU iterating the current queue.
>
> This will lock up other CPUs trying to push new messages on the TX
> queue.
>
> Instead of iterating the queue we fetch just one packet at a time,
> resulting in minimal starvation of the other CPUs.

Did you test this with Felix' patches reducing the time the lock is held
in mac80211?

-Toke
Yibo Zhao April 1, 2019, 12:17 p.m. UTC | #4
On 2019-03-29 15:47, Erik Stromdahl wrote:
> On 3/27/19 6:49 PM, Peter Oh wrote:
>> 
>> 
>> On 03/27/2019 09:29 AM, Erik Stromdahl wrote:
>>> Iterating the TX queue and thereby dequeuing all available packets in the
>>> queue could result in performance penalties on some SMP systems.
>>> 
>> Please share the test results and numbers you've gathered to help inform
>> others' thoughts.
>> 
> 
> I haven't run any tests with ath10k PCIe, but Yibo Zhao had noticed a 10%
> degradation without this patch.
> 
> Yibo:
> Can you provide iperf results etc. that show the performance gain?

My tests are based on IxChariot with a cabled setup (two-core AP system).
WDS mode (10% deviation):
With previous change: UDP DL 1246 Mbps
W/O previous change:  UDP DL 987 Mbps

Normal mode:
With previous change: UDP DL 1380 Mbps
W/O previous change:  UDP DL 1310 Mbps

I have also included the aqm status below.

With previous change:

tid ac backlog-bytes backlog-packets new-flows drops marks overlimit collisions tx-bytes tx-packets flags
0 2 0 0 5342229 0 0 0 0 3867657029 5342229 0x0(RUN)
1 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
2 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
3 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
4 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
5 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
6 0 0 0 2 0 0 0 0 144 2 0x0(RUN)
7 0 0 0 2 0 0 0 0 282 2 0x0(RUN)
8 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
9 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
10 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
11 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
12 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
13 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
14 0 0 0 0 0 0 0 0 0 0 0x0(RUN)
15 0 0 0 0 0 0 0 0 0 0 0x0(RUN)

We see no difference between tx-packets and new-flows.

W/O previous change:

tid ac backlog-bytes backlog-packets new-flows drops marks overlimit collisions tx-bytes tx-packets flags
0 2 0 0 2233059 3 0 9236 12 1159661783 6380867 0x0(RUN)
1 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
2 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
3 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
4 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
5 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
6 0 0 0 1 0 0 0 0 144 2 0x0(RUN)
7 0 0 0 1 0 0 0 0 282 2 0x0(RUN)
8 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
9 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
10 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
11 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
12 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
13 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
14 0 0 0 0 0 0 0 0 0 0 0x0(RUN)
15 0 0 0 0 0 0 0 0 0 0 0x0(RUN)

The new-flows count is roughly one-third of the total tx-packets.
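
To help interpret the new-flows column: in mac80211's fq scheduler that
counter is incremented whenever a packet is enqueued to a flow that is
currently idle. A simplified sketch of the accounting follows; the structs
are minimal stand-ins rather than the real struct fq_tin / struct fq_flow
from include/net/fq.h, and hashing, drop and overlimit handling are omitted:

#include <linux/list.h>
#include <linux/skbuff.h>
#include <linux/types.h>

struct flow_sketch {
	struct list_head flowchain;	/* empty while the flow is idle */
	struct sk_buff_head queue;
};

struct tin_sketch {
	struct list_head new_flows;
	u32 flows;			/* shown as "new-flows" above */
	u32 backlog_packets;
};

static void tin_enqueue_sketch(struct tin_sketch *tin,
			       struct flow_sketch *flow,
			       struct sk_buff *skb)
{
	if (list_empty(&flow->flowchain)) {
		/* The flow was idle: it is (re)activated and the
		 * new-flows counter is incremented.
		 */
		tin->flows++;
		list_add_tail(&flow->flowchain, &tin->new_flows);
	}

	__skb_queue_tail(&flow->queue, skb);
	tin->backlog_packets++;
}

Read this way, new-flows equal to tx-packets means the flow went idle between
every enqueue (one packet per activation), while new-flows at one third of
tx-packets means roughly three packets were queued per activation on average.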

> 
> --
> Erik
Erik Stromdahl April 16, 2019, 6:54 p.m. UTC | #5
On 4/1/19 1:05 PM, Toke Høiland-Jørgensen wrote:
> Erik Stromdahl <erik.stromdahl@gmail.com> writes:
> 
>> Iterating the TX queue and thereby dequeuing all available packets in the
>> queue could result in performance penalties on some SMP systems.
>>
>> The reason for this is most likely that the per-ac lock (active_txq_lock)
>> in mac80211 will be held by the CPU iterating the current queue.
>>
>> This will lock up other CPUs trying to push new messages on the TX
>> queue.
>>
>> Instead of iterating the queue we fetch just one packet at a time,
>> resulting in minimal starvation of the other CPUs.
> 
> Did you test this with Felix' patches reducing the time the lock is held
> in mac80211?
> 
> -Toke
> 
Hi Toke,

I am not aware of these patches. Can you please point them out for me?

--
Erik
Toke Høiland-Jørgensen April 16, 2019, 7:07 p.m. UTC | #6
Erik Stromdahl <erik.stromdahl@gmail.com> writes:

> On 4/1/19 1:05 PM, Toke Høiland-Jørgensen wrote:
>> Erik Stromdahl <erik.stromdahl@gmail.com> writes:
>> 
>>> Iterating the TX queue and thereby dequeuing all available packets in the
>>> queue could result in performance penalties on some SMP systems.
>>>
>>> The reason for this is most likely that the per-ac lock (active_txq_lock)
>>> in mac80211 will be held by the CPU iterating the current queue.
>>>
>>> This will lock up other CPUs trying to push new messages on the TX
>>> queue.
>>>
>>> Instead of iterating the queue we fetch just one packet at a time,
>>> resulting in minimal starvation of the other CPUs.
>> 
>> Did you test this with Felix' patches reducing the time the lock is held
>> in mac80211?
>> 
>> -Toke
>> 
> Hi Toke,
>
> I am not aware of these patches. Can you please point them out for me?

They've already been merged. Commits dcec1d9bc8a7 and 7ef769459f14 in
mac80211-next :)

-Toke
Erik Stromdahl April 17, 2019, 1:29 p.m. UTC | #7
On 4/16/19 9:07 PM, Toke Høiland-Jørgensen wrote:
> Erik Stromdahl <erik.stromdahl@gmail.com> writes:
> 
>> On 4/1/19 1:05 PM, Toke Høiland-Jørgensen wrote:
>>> Erik Stromdahl <erik.stromdahl@gmail.com> writes:
>>>
>>>> Iterating the TX queue and thereby dequeuing all available packets in the
>>>> queue could result in performance penalties on some SMP systems.
>>>>
>>>> The reason for this is most likely that the per-ac lock (active_txq_lock)
>>>> in mac80211 will be held by the CPU iterating the current queue.
>>>>
>>>> This will lock up other CPUs trying to push new messages on the TX
>>>> queue.
>>>>
>>>> Instead of iterating the queue we fetch just one packet at a time,
>>>> resulting in minimal starvation of the other CPUs.
>>>
>>> Did you test this with Felix' patches reducing the time the lock is held
>>> in mac80211?
>>>
>>> -Toke
>>>
>> Hi Toke,
>>
>> I am not aware of these patches. Can you please point them out for me?
> 
> They've already been merged. Commits dcec1d9bc8a7 and 7ef769459f14 in
> mac80211-next :)
> 
> -Toke
> 

I see. I am using the ath tree and I couldn't find them there.
I can cherry-pick them to my own tree and try them out
(or wait until Kalle updates ath.git).

--
Erik
Kalle Valo April 26, 2019, 7:07 a.m. UTC | #8
Erik Stromdahl <erik.stromdahl@gmail.com> writes:

> On 4/16/19 9:07 PM, Toke Høiland-Jørgensen wrote:
>> Erik Stromdahl <erik.stromdahl@gmail.com> writes:
>>
>>> On 4/1/19 1:05 PM, Toke Høiland-Jørgensen wrote:
>>>> Erik Stromdahl <erik.stromdahl@gmail.com> writes:
>>>>
>>>>> Iterating the TX queue and thereby dequeuing all available packets in the
>>>>> queue could result in performance penalties on some SMP systems.
>>>>>
>>>>> The reason for this is most likely that the per-ac lock (active_txq_lock)
>>>>> in mac80211 will be held by the CPU iterating the current queue.
>>>>>
>>>>> This will lock up other CPUs trying to push new messages on the TX
>>>>> queue.
>>>>>
>>>>> Instead of iterating the queue we fetch just one packet at a time,
>>>>> resulting in minimal starvation of the other CPUs.
>>>>
>>>> Did you test this with Felix' patches reducing the time the lock is held
>>>> in mac80211?
>>>>
>>>> -Toke
>>>>
>>> Hi Toke,
>>>
>>> I am not aware of these patches. Can you please point them out for me?
>>
>> They've already been merged. Commits dcec1d9bc8a7 and 7ef769459f14 in
>> mac80211-next :)
>>
>> -Toke
>>
>
> I see. I am using the ath tree and I couldn't find them there.
> I can cherry-pick them to my own tree and try them out
> (or wait until Kalle updates ath.git).

It will take a while before these commits trickle down to the ath-next
branch, most likely after v5.2-rc1 is released.
Kalle Valo Sept. 25, 2019, 5:41 a.m. UTC | #9
Erik Stromdahl <erik.stromdahl@gmail.com> wrote:

> Iterating the TX queue and thereby dequeuing all available packets in the
> queue could result in performance penalties on some SMP systems.
> 
> The reason for this is most likely that the per-ac lock (active_txq_lock)
> in mac80211 will be held by the CPU iterating the current queue.
> 
> This will lock up other CPUs trying to push new messages on the TX
> queue.
> 
> Instead of iterating the queue we fetch just one packet at a time,
> resulting in minimal starvation of the other CPUs.
> 
> Reported-by: Yibo Zhao <yiboz@codeaurora.org>
> Signed-off-by: Erik Stromdahl <erik.stromdahl@gmail.com>

Like others, I'm not convinced about this either. To me it looks like a quick
workaround instead of properly investigating and fixing the root cause. To my
understanding the throughput dip was caused by this commit:

e3148cc5fecf ("ath10k: fix return value check in wake_tx_q op")

So to me it feels like the issue has been there all along, just hidden, and the
fix above just exposed it.

Patch

diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
index b73c23d4ce86..c9e700b844f8 100644
--- a/drivers/net/wireless/ath/ath10k/mac.c
+++ b/drivers/net/wireless/ath/ath10k/mac.c
@@ -4356,7 +4356,6 @@ static void ath10k_mac_op_wake_tx_queue(struct ieee80211_hw *hw,
 					struct ieee80211_txq *txq)
 {
 	struct ath10k *ar = hw->priv;
-	int ret;
 	u8 ac;
 
 	ath10k_htt_tx_txq_update(hw, txq);
@@ -4369,11 +4368,9 @@ static void ath10k_mac_op_wake_tx_queue(struct ieee80211_hw *hw,
 	if (!txq)
 		goto out;
 
-	while (ath10k_mac_tx_can_push(hw, txq)) {
-		ret = ath10k_mac_tx_push_txq(hw, txq);
-		if (ret < 0)
-			break;
-	}
+	if (ath10k_mac_tx_can_push(hw, txq))
+		ath10k_mac_tx_push_txq(hw, txq);
+
 	ieee80211_return_txq(hw, txq);
 	ath10k_htt_tx_txq_update(hw, txq);
 out:
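
For reference, the handler after this patch reads roughly as follows. This is
reconstructed from the two hunks above; the lines between the hunks are not
shown in the diff and are elided here:

static void ath10k_mac_op_wake_tx_queue(struct ieee80211_hw *hw,
					struct ieee80211_txq *txq)
{
	struct ath10k *ar = hw->priv;
	u8 ac;

	ath10k_htt_tx_txq_update(hw, txq);

	/* ... context between the two hunks omitted in the diff ... */

	if (!txq)
		goto out;

	/* Push at most one frame per wake instead of draining the queue,
	 * keeping the time the mac80211 per-ac lock is held to a minimum.
	 */
	if (ath10k_mac_tx_can_push(hw, txq))
		ath10k_mac_tx_push_txq(hw, txq);

	ieee80211_return_txq(hw, txq);
	ath10k_htt_tx_txq_update(hw, txq);
out:
	/* ... remainder omitted in the diff ... */
}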