Message ID | 20190130123615.592071954@linutronix.de
---|---
State | New, archived
Series | genirq, proc: Speedup /proc/stat interrupt statistics
On Wed, Jan 30, 2019 at 01:31:32PM +0100, Thomas Gleixner wrote:
> +static void show_irq_gap(struct seq_file *p, int gap)
> +{
> +	static const char zeros[] = " 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0";
> +
> +	while (gap > 0) {
> +		int inc = min_t(int, gap, ARRAY_SIZE(zeros) / 2);
> +
> +		seq_write(p, zeros, 2 * inc);
> +		gap -= inc;
> +	}
> +}
> +
> +static void show_all_irqs(struct seq_file *p)
> +{
> +	int i, next = 0;
> +
> +	for_each_active_irq(i) {
> +		show_irq_gap(p, i - next);
> +		seq_put_decimal_ull(p, " ", kstat_irqs_usr(i));
> +		next = i + 1;
> +	}
> +	show_irq_gap(p, nr_irqs - next);
> +}

Every signed int can and should be unsigned int in this patch.
On Thu, 31 Jan 2019, Alexey Dobriyan wrote:
> On Wed, Jan 30, 2019 at 01:31:32PM +0100, Thomas Gleixner wrote:
> > +static void show_irq_gap(struct seq_file *p, int gap)
> > +{
> > +	static const char zeros[] = " 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0";
> > +
> > +	while (gap > 0) {
> > +		int inc = min_t(int, gap, ARRAY_SIZE(zeros) / 2);
> > +
> > +		seq_write(p, zeros, 2 * inc);
> > +		gap -= inc;
> > +	}
> > +}
> > +
> > +static void show_all_irqs(struct seq_file *p)
> > +{
> > +	int i, next = 0;
> > +
> > +	for_each_active_irq(i) {
> > +		show_irq_gap(p, i - next);
> > +		seq_put_decimal_ull(p, " ", kstat_irqs_usr(i));
> > +		next = i + 1;
> > +	}
> > +	show_irq_gap(p, nr_irqs - next);
> > +}
>
> Every signed int can and should be unsigned int in this patch.

Indeed.
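[Editor's note: the requested change amounts to the following. This is a sketch of the review feedback applied to the posted code, not a quote of the eventual v2.]

	/* Sketch: the same helpers with the signedness fixed as requested;
	 * interrupt numbers and gap sizes are never negative. */
	static void show_irq_gap(struct seq_file *p, unsigned int gap)
	{
		static const char zeros[] = " 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0";

		while (gap > 0) {
			unsigned int inc = min_t(unsigned int, gap,
						 ARRAY_SIZE(zeros) / 2);

			seq_write(p, zeros, 2 * inc);
			gap -= inc;
		}
	}

	static void show_all_irqs(struct seq_file *p)
	{
		unsigned int i, next = 0;

		for_each_active_irq(i) {
			show_irq_gap(p, i - next);
			seq_put_decimal_ull(p, " ", kstat_irqs_usr(i));
			next = i + 1;
		}
		show_irq_gap(p, nr_irqs - next);
	}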
--- a/fs/proc/stat.c
+++ b/fs/proc/stat.c
@@ -79,6 +79,30 @@ static u64 get_iowait_time(int cpu)
 
 #endif
 
+static void show_irq_gap(struct seq_file *p, int gap)
+{
+	static const char zeros[] = " 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0";
+
+	while (gap > 0) {
+		int inc = min_t(int, gap, ARRAY_SIZE(zeros) / 2);
+
+		seq_write(p, zeros, 2 * inc);
+		gap -= inc;
+	}
+}
+
+static void show_all_irqs(struct seq_file *p)
+{
+	int i, next = 0;
+
+	for_each_active_irq(i) {
+		show_irq_gap(p, i - next);
+		seq_put_decimal_ull(p, " ", kstat_irqs_usr(i));
+		next = i + 1;
+	}
+	show_irq_gap(p, nr_irqs - next);
+}
+
 static int show_stat(struct seq_file *p, void *v)
 {
 	int i, j;
@@ -156,9 +180,7 @@ static int show_stat(struct seq_file *p,
 	}
 	seq_put_decimal_ull(p, "intr ", (unsigned long long)sum);
 
-	/* sum again ? it could be updated? */
-	for_each_irq_nr(j)
-		seq_put_decimal_ull(p, " ", kstat_irqs_usr(j));
+	show_all_irqs(p);
 
 	seq_printf(p,
 		"\nctxt %llu\n"
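[Editor's note: the gap-filling trick is compact enough to be worth spelling out. `zeros` holds sixteen " 0" pairs of two bytes each; `ARRAY_SIZE(zeros) / 2` is 16 because the array size of 33 includes the terminating NUL, so each write emits up to sixteen zero entries and never touches the NUL. A user-space sketch of the same technique, with fwrite() standing in for seq_write(); illustrative only, not part of the patch:]

	#include <stdio.h>

	#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

	/* Print "gap" space-prefixed zeros in chunks instead of
	 * formatting each entry individually. sizeof(zeros) is 33
	 * (16 pairs + NUL), so ARRAY_SIZE(zeros) / 2 == 16 and
	 * 2 * inc never covers the NUL terminator. */
	static void show_gap(FILE *p, unsigned int gap)
	{
		static const char zeros[] = " 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0";

		while (gap > 0) {
			unsigned int inc = gap;

			if (inc > ARRAY_SIZE(zeros) / 2)
				inc = ARRAY_SIZE(zeros) / 2;
			fwrite(zeros, 1, 2 * inc, p);
			gap -= inc;
		}
	}

	int main(void)
	{
		show_gap(stdout, 37);	/* emits " 0" 37 times */
		putchar('\n');
		return 0;
	}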
Waiman reported that on large systems with a large number of interrupts,
reading /proc/stat takes a long time to sum up the interrupt statistics.
In principle this is not a problem, but for unknown reasons some
enterprise-quality software reads /proc/stat at high frequency.

The reason for the slowness is that interrupt statistics are accounted
per CPU, so the /proc/stat logic has to sum up the statistics of every
CPU for each interrupt.

The interrupt core now provides a per-interrupt summary counter which
can be used to avoid the summation loops completely, except for
interrupts marked PER_CPU, which are only a small fraction of the
interrupt space, if present at all.

Another simplification is to iterate only over the active interrupts,
skip the potentially large gaps in the interrupt number space, and just
print zeros for the gaps without going into the interrupt core in the
first place.

Reported-by: Waiman Long <longman@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 fs/proc/stat.c | 28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)
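[Editor's note: the summary counter referred to above lives in the interrupt core. Conceptually it works roughly as follows; this is a hedged sketch of the idea only, and the struct and function names are illustrative assumptions, not quotes from the genirq patches.]

	/* Conceptual sketch: keep a plain total alongside the per-CPU
	 * counts so the reader side is O(1) instead of a loop over all
	 * possible CPUs. All names below are illustrative. */
	struct irq_stats {
		unsigned int __percpu *kstat_irqs;	/* per-CPU counts */
		unsigned int tot_count;			/* summary counter */
		bool per_cpu_devid;			/* PER_CPU interrupt? */
	};

	/* Hot path (interrupt entry): bump both counters. */
	static inline void irq_stats_inc(struct irq_stats *st)
	{
		__this_cpu_inc(*st->kstat_irqs);
		st->tot_count++;	/* may race for PER_CPU interrupts */
	}

	/* Reader side (/proc/stat): O(1) for normal interrupts. PER_CPU
	 * interrupts can fire concurrently on several CPUs, so their
	 * non-atomic total is unreliable and the per-CPU counts are
	 * summed instead. */
	static unsigned int irq_stats_read(struct irq_stats *st)
	{
		unsigned int sum = 0;
		int cpu;

		if (!st->per_cpu_devid)
			return st->tot_count;

		for_each_possible_cpu(cpu)
			sum += *per_cpu_ptr(st->kstat_irqs, cpu);
		return sum;
	}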