Message ID | 20220926220457.1517120-1-Jason@zx2c4.com (mailing list archive) |
---|---|
State | Not Applicable |
Series | [v2] random: use immediate per-cpu timer rather than workqueue for mixing fast pool |
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Not a local patch |
From: Jason A. Donenfeld
> Sent: 26 September 2022 23:05
>
> Previously, the fast pool was dumped into the main pool periodically in
> the fast pool's hard IRQ handler. This worked fine and there weren't
> problems with it, until RT came around. Since RT converts spinlocks into
> sleeping locks, problems cropped up. Rather than switching to raw
> spinlocks, the RT developers preferred we make the transformation from
> originally doing:
>
>     do_some_stuff()
>     spin_lock()
>     do_some_other_stuff()
>     spin_unlock()
>
> to doing:
>
>     do_some_stuff()
>     queue_work_on(some_other_stuff_worker)
>
> This is an ordinary pattern done all over the kernel. However, Sherry
> noticed a 10% performance regression in qperf TCP over a 40gbps
> InfiniBand card. Quoting her message:
>
> > MT27500 Family [ConnectX-3] cards:
> > Infiniband device 'mlx4_0' port 1 status:
> >     default gid:    fe80:0000:0000:0000:0010:e000:0178:9eb1
> >     base lid:       0x6
> >     sm lid:         0x1
> >     state:          4: ACTIVE
> >     phys state:     5: LinkUp
> >     rate:           40 Gb/sec (4X QDR)
> >     link_layer:     InfiniBand
> >
> > Cards are configured with IP addresses on private subnet for IPoIB
> > performance testing.
> > Regression identified in this bug is in TCP latency in this stack as reported
> > by qperf tcp_lat metric:
> >
> > We have one system listen as a qperf server:
> > [root@yourQperfServer ~]# qperf
> >
> > Have the other system connect to qperf server as a client (in this
> > case, it's X7 server with Mellanox card):
> > [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
>
> Rather than incur the scheduling latency from queue_work_on, we can
> instead switch to running on the next timer tick, on the same core,
> deferrably so. This also batches things a bit more -- once per jiffy --
> which is probably okay now that mix_interrupt_randomness() can credit
> multiple bits at once. It still puts a bit of pressure on fast_mix(),
> but hopefully that's acceptable.

I thought NOHZ systems didn't take a timer interrupt every 'jiffy'.
If that is true, what actually happens?

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
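For context, here is a minimal sketch of the deferral pattern the quoted commit message describes, not the actual random.c code: the hard IRQ handler only touches per-cpu state and hands the lock-taking half to a worker pinned to the same CPU. All demo_* names are hypothetical; only the workqueue calls (INIT_WORK, queue_work_on, system_highpri_wq) mirror what the current driver does.

```c
#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct demo_pool {
	unsigned long data;
	struct work_struct mix;
};

static DEFINE_PER_CPU(struct demo_pool, demo_pools);
static DEFINE_SPINLOCK(demo_lock);
static unsigned long demo_shared_total;

/* Process context: taking the (sleeping-on-RT) lock is fine here. */
static void demo_mix_worker(struct work_struct *work)
{
	struct demo_pool *pool = container_of(work, struct demo_pool, mix);

	spin_lock(&demo_lock);
	demo_shared_total += pool->data;	/* "do_some_other_stuff()" */
	pool->data = 0;
	spin_unlock(&demo_lock);
}

/* Hard IRQ context: touch only per-cpu state, then defer the rest to a
 * worker on this same CPU -- the queue_work_on() shape from the commit
 * message. */
static void demo_irq_handler(void)
{
	struct demo_pool *pool = this_cpu_ptr(&demo_pools);

	pool->data++;	/* "do_some_stuff()": lockless per-cpu update */

	if (unlikely(!pool->mix.func))
		INIT_WORK(&pool->mix, demo_mix_worker);
	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &pool->mix);
}
```

The extra hop through the scheduler that this pattern introduces is where the qperf TCP latency regression described above comes from.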
On Tue, Sep 27, 2022 at 07:41:52AM +0000, David Laight wrote:
> From: Jason A. Donenfeld
> > Sent: 26 September 2022 23:05
> >
> > [...]
> >
> > Rather than incur the scheduling latency from queue_work_on, we can
> > instead switch to running on the next timer tick, on the same core,
> > deferrably so. This also batches things a bit more -- once per jiffy --
> > which is probably okay now that mix_interrupt_randomness() can credit
> > multiple bits at once. It still puts a bit of pressure on fast_mix(),
> > but hopefully that's acceptable.
>
> I thought NOHZ systems didn't take a timer interrupt every 'jiffy'.
> If that is true, what actually happens?

The TIMER_DEFERRABLE part of this patch is a mistake; I'm going to make
that 0. However, since expires == jiffies, there's no difference. It's
still undesirable though.

Jason
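The patch below still passes TIMER_DEFERRABLE. As a rough sketch of the follow-up Jason describes (a flags value of 0 instead), assuming hypothetical demo_* names, the arrangement would look something like this; only __TIMER_INITIALIZER(), timer_pending(), and add_timer_on() are the real kernel APIs the patch itself relies on.

```c
#include <linux/jiffies.h>
#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/timer.h>

struct demo_fast_pool {
	unsigned long pool[4];
	unsigned int count;
	struct timer_list mix;
};

static void demo_mix(struct timer_list *t);

/* Flags of 0 rather than TIMER_DEFERRABLE, per the reply above. */
static DEFINE_PER_CPU(struct demo_fast_pool, demo_pools) = {
	.mix = __TIMER_INITIALIZER(demo_mix, 0),
};

/* Timer callback: runs on the CPU that armed the timer, at the next tick. */
static void demo_mix(struct timer_list *t)
{
	struct demo_fast_pool *pool = container_of(t, struct demo_fast_pool, mix);

	/* dump pool->pool into the input pool and credit entropy here */
	pool->count = 0;
}

/* Hard IRQ path: arm the timer for "now" on this CPU if it isn't pending. */
static void demo_schedule_mix(struct demo_fast_pool *pool)
{
	if (!timer_pending(&pool->mix)) {
		pool->mix.expires = jiffies;
		add_timer_on(&pool->mix, raw_smp_processor_id());
	}
}
```

With expires set to jiffies the timer is already due when armed, so on a busy CPU the deferrable flag changes nothing; it only matters if a NOHZ-idle CPU would otherwise postpone the expiry, which is the situation David is asking about.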
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 1cb53495e8f7..08bb46a50802 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -928,17 +928,20 @@ struct fast_pool {
 	unsigned long pool[4];
 	unsigned long last;
 	unsigned int count;
-	struct work_struct mix;
+	struct timer_list mix;
 };
 
+static void mix_interrupt_randomness(struct timer_list *work);
+
 static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
 #ifdef CONFIG_64BIT
 #define FASTMIX_PERM SIPHASH_PERMUTATION
-	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
+	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 },
 #else
 #define FASTMIX_PERM HSIPHASH_PERMUTATION
-	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
+	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
 #endif
+	.mix = __TIMER_INITIALIZER(mix_interrupt_randomness, TIMER_DEFERRABLE)
 };
 
 /*
@@ -980,7 +983,7 @@ int __cold random_online_cpu(unsigned int cpu)
 }
 #endif
 
-static void mix_interrupt_randomness(struct work_struct *work)
+static void mix_interrupt_randomness(struct timer_list *work)
 {
 	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
 	/*
@@ -1034,10 +1037,11 @@ void add_interrupt_randomness(int irq)
 	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
 		return;
 
-	if (unlikely(!fast_pool->mix.func))
-		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
 	fast_pool->count |= MIX_INFLIGHT;
-	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+	if (!timer_pending(&fast_pool->mix)) {
+		fast_pool->mix.expires = jiffies;
+		add_timer_on(&fast_pool->mix, raw_smp_processor_id());
+	}
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);