
[v1,3/6] irqchip/gic-v3: support SGI broadcast

Message ID: 20241021042218.746659-4-yuzhao@google.com
State: New
Series: mm/arm64: re-enable HVO

Commit Message

Yu Zhao Oct. 21, 2024, 4:22 a.m. UTC
GIC v3 and later support SGI broadcast, i.e., the mode that routes
interrupts to all PEs in the system excluding the local CPU.

Supporting this mode can avoid looping through all the remote CPUs
when broadcasting SGIs, especially for systems with 200+ CPUs. The
performance improvement can be measured with the rest of this series
booted with "hugetlb_free_vmemmap=on irqchip.gicv3_pseudo_nmi=1":

  cd /sys/kernel/mm/hugepages/
  echo 600 >hugepages-1048576kB/nr_hugepages
  echo 2048kB >hugepages-1048576kB/demote_size
  perf record -g -- bash -c "echo 600 >hugepages-1048576kB/demote"

         gic_ipi_send_mask()  bash sys time
Before:  38.14%               0m10.513s
After:    0.20%               0m5.132s

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 drivers/irqchip/irq-gic-v3.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

Comments

kernel test robot Oct. 22, 2024, 12:24 a.m. UTC | #1
Hi Yu,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 42f7652d3eb527d03665b09edac47f85fb600924]

url:    https://github.com/intel-lab-lkp/linux/commits/Yu-Zhao/mm-hugetlb_vmemmap-batch-update-PTEs/20241021-122330
base:   42f7652d3eb527d03665b09edac47f85fb600924
patch link:    https://lore.kernel.org/r/20241021042218.746659-4-yuzhao%40google.com
patch subject: [PATCH v1 3/6] irqchip/gic-v3: support SGI broadcast
config: arm-defconfig (https://download.01.org/0day-ci/archive/20241022/202410221026.yJKHaGWA-lkp@intel.com/config)
compiler: clang version 14.0.6 (https://github.com/llvm/llvm-project f28c006a5895fc0e329fe15fead81e37457cb1d1)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241022/202410221026.yJKHaGWA-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202410221026.yJKHaGWA-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/irqchip/irq-gic-v3.c:1401:8: warning: shift count >= width of type [-Wshift-count-overflow]
           val = BIT(ICC_SGI1R_IRQ_ROUTING_MODE_BIT) | (irq << ICC_SGI1R_SGI_ID_SHIFT);
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/vdso/bits.h:7:26: note: expanded from macro 'BIT'
   #define BIT(nr)                 (UL(1) << (nr))
                                          ^  ~~~~
   1 warning generated.


vim +1401 drivers/irqchip/irq-gic-v3.c

  1396	
  1397	static void gic_broadcast_sgi(unsigned int irq)
  1398	{
  1399		u64 val;
  1400	
> 1401		val = BIT(ICC_SGI1R_IRQ_ROUTING_MODE_BIT) | (irq << ICC_SGI1R_SGI_ID_SHIFT);
  1402	
  1403		pr_devel("CPU %d: broadcasting SGI %u\n", smp_processor_id(), irq);
  1404		gic_write_sgi1r(val);
  1405	}
  1406
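
The warning fires because, on 32-bit arm, BIT() shifts an unsigned long,
which is only 32 bits wide, while ICC_SGI1R_IRQ_ROUTING_MODE_BIT is 40.
A minimal 32-bit-safe sketch, assuming BIT_ULL() and a widened SGI-ID
shift are an acceptable way to address it (this is not the fix that was
posted to the list):

	static void gic_broadcast_sgi(unsigned int irq)
	{
		u64 val;

		/* Build the value in a 64-bit type so both shifts are
		 * well-defined even when BITS_PER_LONG == 32. */
		val = BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT) |
		      ((u64)irq << ICC_SGI1R_SGI_ID_SHIFT);

		pr_devel("CPU %d: broadcasting SGI %u\n", smp_processor_id(), irq);
		gic_write_sgi1r(val);
	}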
Marc Zyngier Oct. 22, 2024, 3:03 p.m. UTC | #2
On Mon, 21 Oct 2024 05:22:15 +0100,
Yu Zhao <yuzhao@google.com> wrote:
> 
> GIC v3 and later support SGI broadcast, i.e., the mode that routes
> interrupts to all PEs in the system excluding the local CPU.
> 
> Supporting this mode can avoid looping through all the remote CPUs
> when broadcasting SGIs, especially for systems with 200+ CPUs. The
> performance improvement can be measured with the rest of this series
> booted with "hugetlb_free_vmemmap=on irqchip.gicv3_pseudo_nmi=1":
> 
>   cd /sys/kernel/mm/hugepages/
>   echo 600 >hugepages-1048576kB/nr_hugepages
>   echo 2048kB >hugepages-1048576kB/demote_size
>   perf record -g -- bash -c "echo 600 >hugepages-1048576kB/demote"
> 
>          gic_ipi_send_mask()  bash sys time
> Before:  38.14%               0m10.513s
> After:    0.20%               0m5.132s
> 
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  drivers/irqchip/irq-gic-v3.c | 20 +++++++++++++++++++-
>  1 file changed, 19 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> index ce87205e3e82..42c39385e1b9 100644
> --- a/drivers/irqchip/irq-gic-v3.c
> +++ b/drivers/irqchip/irq-gic-v3.c
> @@ -1394,9 +1394,20 @@ static void gic_send_sgi(u64 cluster_id, u16 tlist, unsigned int irq)
>  	gic_write_sgi1r(val);
>  }
>  
> +static void gic_broadcast_sgi(unsigned int irq)
> +{
> +	u64 val;
> +
> +	val = BIT(ICC_SGI1R_IRQ_ROUTING_MODE_BIT) | (irq << ICC_SGI1R_SGI_ID_SHIFT);

As picked up by the test bot, please fix the 32bit build.

> +
> +	pr_devel("CPU %d: broadcasting SGI %u\n", smp_processor_id(), irq);
> +	gic_write_sgi1r(val);
> +}
> +
>  static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
>  {
>  	int cpu;
> +	cpumask_t broadcast;
>  
>  	if (WARN_ON(d->hwirq >= 16))
>  		return;
> @@ -1407,6 +1418,13 @@ static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
>  	 */
>  	dsb(ishst);
>  
> +	cpumask_copy(&broadcast, cpu_present_mask);

Why cpu_present_mask? I'd expect that cpu_online_mask should be the
correct mask to use -- we don't IPI offline CPUs, in general.

> +	cpumask_clear_cpu(smp_processor_id(), &broadcast);
> +	if (cpumask_equal(&broadcast, mask)) {
> +		gic_broadcast_sgi(d->hwirq);
> +		goto done;
> +	}

So the (valid) case where you would IPI *everyone* is not handled as a
fast path? That seems a missed opportunity.

This also seems like an expensive way to do it. How about something
like:

	int mcnt = cpumask_weight(mask);
	int ocnt = cpumask_weight(cpu_online_mask);
	if (mcnt == ocnt)  {
		/* Broadcast to all CPUs including self */
	} else if (mcnt == (ocnt - 1) &&
		   !cpumask_test_cpu(smp_processor_id(), mask)) {
		/* Broadcast to all but self */
	}

which avoids the copy + update + full compare.

Thanks,

	M.
Yu Zhao Oct. 25, 2024, 5:07 a.m. UTC | #3
Hi Marc,

On Tue, Oct 22, 2024 at 9:03 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Mon, 21 Oct 2024 05:22:15 +0100,
> Yu Zhao <yuzhao@google.com> wrote:
> >
> > GIC v3 and later support SGI broadcast, i.e., the mode that routes
> > interrupts to all PEs in the system excluding the local CPU.
> >
> > Supporting this mode can avoid looping through all the remote CPUs
> > when broadcasting SGIs, especially for systems with 200+ CPUs. The
> > performance improvement can be measured with the rest of this series
> > booted with "hugetlb_free_vmemmap=on irqchip.gicv3_pseudo_nmi=1":
> >
> >   cd /sys/kernel/mm/hugepages/
> >   echo 600 >hugepages-1048576kB/nr_hugepages
> >   echo 2048kB >hugepages-1048576kB/demote_size
> >   perf record -g -- bash -c "echo 600 >hugepages-1048576kB/demote"
> >
> >          gic_ipi_send_mask()  bash sys time
> > Before:  38.14%               0m10.513s
> > After:    0.20%               0m5.132s
> >
> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > ---
> >  drivers/irqchip/irq-gic-v3.c | 20 +++++++++++++++++++-
> >  1 file changed, 19 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> > index ce87205e3e82..42c39385e1b9 100644
> > --- a/drivers/irqchip/irq-gic-v3.c
> > +++ b/drivers/irqchip/irq-gic-v3.c
> > @@ -1394,9 +1394,20 @@ static void gic_send_sgi(u64 cluster_id, u16 tlist, unsigned int irq)
> >       gic_write_sgi1r(val);
> >  }
> >
> > +static void gic_broadcast_sgi(unsigned int irq)
> > +{
> > +     u64 val;
> > +
> > +     val = BIT(ICC_SGI1R_IRQ_ROUTING_MODE_BIT) | (irq << ICC_SGI1R_SGI_ID_SHIFT);
>
> As picked up by the test bot, please fix the 32bit build.

Will do.

> > +
> > +     pr_devel("CPU %d: broadcasting SGI %u\n", smp_processor_id(), irq);
> > +     gic_write_sgi1r(val);
> > +}
> > +
> >  static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
> >  {
> >       int cpu;
> > +     cpumask_t broadcast;
> >
> >       if (WARN_ON(d->hwirq >= 16))
> >               return;
> > @@ -1407,6 +1418,13 @@ static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
> >        */
> >       dsb(ishst);
> >
> > +     cpumask_copy(&broadcast, cpu_present_mask);
>
> Why cpu_present_mask? I'd expect that cpu_online_mask should be the
> correct mask to use -- we don't IPI offline CPUs, in general.

This is exactly because "we don't IPI offline CPUs, in general",
assuming "we" means the kernel, not GIC.

My interpretation of what the GIC spec says ("0b1: Interrupts routed
to all PEs in the system, excluding self") is that it broadcasts IPIs to
"cpu_present_mask" (minus the local one). So if the kernel uses
"cpu_online_mask" here, GIC would send IPIs to offline CPUs
(cpu_present_mask ^ cpu_online_mask), and I don't know whether that
is defined behavior.

But if you actually meant GIC doesn't IPI offline CPUs, then yes, here
the kernel should use "cpu_online_mask".

> > +     cpumask_clear_cpu(smp_processor_id(), &broadcast);
> > +     if (cpumask_equal(&broadcast, mask)) {
> > +             gic_broadcast_sgi(d->hwirq);
> > +             goto done;
> > +     }
>
> So the (valid) case where you would IPI *everyone* is not handled as a
> fast path? That seems a missed opportunity.

You are right: it should handle that case.

> This also seems like an expensive way to do it. How about something
> like:
>
>         int mcnt = cpumask_weight(mask);
>         int ocnt = cpumask_weight(cpu_online_mask);
>         if (mcnt == ocnt)  {
>                 /* Broadcast to all CPUs including self */

Does the comment mean the following two steps?
1. Broadcasting to everyone else.
2. Sending to self.

My understanding of the "Interrupt Routing Mode" is that it can't
broadcast to all CPUs including self, and therefore we need the above
two steps, which still can be a lot faster. Is my understanding
correct?

>         } else if (mcnt == (ocnt - 1) &&
>                    !cpumask_test_cpu(smp_processor_id(), mask)) {
>                 /* Broadcast to all but self */
>         }
>
> which avoids the copy + update + full compare.

Thank you.
Marc Zyngier Oct. 25, 2024, 4:14 p.m. UTC | #4
On Fri, 25 Oct 2024 06:07:45 +0100,
Yu Zhao <yuzhao@google.com> wrote:
> 
> Hi Marc,
> 
> On Tue, Oct 22, 2024 at 9:03 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Mon, 21 Oct 2024 05:22:15 +0100,
> > Yu Zhao <yuzhao@google.com> wrote:
> > >
> > > @@ -1407,6 +1418,13 @@ static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
> > >        */
> > >       dsb(ishst);
> > >
> > > +     cpumask_copy(&broadcast, cpu_present_mask);
> >
> > Why cpu_present_mask? I'd expect that cpu_online_mask should be the
> > correct mask to use -- we don't IPI offline CPUs, in general.
> 
> This is exactly because "we don't IPI offline CPUs, in general",
> assuming "we" means the kernel, not GIC.
> 
> My interpretation of what the GIC spec says ("0b1: Interrupts routed
> to all PEs in the system, excluding self") is that it broadcasts IPIs to
> "cpu_present_mask" (minus the local one). So if the kernel uses
> "cpu_online_mask" here, GIC would send IPIs to offline CPUs
> (cpu_present_mask ^ cpu_online_mask), and I don't know whether that
> is defined behavior.

Offline CPUs are not known to the kernel. Most likely, they are either
powered off, or spending quality time in Secure or Realm mode. Either
way, this is none of our business.

Your approach would also make the kernel perform pretty inconsistently
depending on whether CPUs are offline or not.

> 
> But if you actually meant GIC doesn't IPI offline CPUs, then yes, here
> the kernel should use "cpu_online_mask".
> 
> > > +     cpumask_clear_cpu(smp_processor_id(), &broadcast);
> > > +     if (cpumask_equal(&broadcast, mask)) {
> > > +             gic_broadcast_sgi(d->hwirq);
> > > +             goto done;
> > > +     }
> >
> > So the (valid) case where you would IPI *everyone* is not handled as a
> > fast path? That seems a missed opportunity.
> 
> You are right: it should handle that case.
> 
> > This also seems like an expensive way to do it. How about something
> > like:
> >
> >         int mcnt = cpumask_weight(mask);
> >         int ocnt = cpumask_weight(cpu_online_mask);
> >         if (mcnt == ocnt)  {
> >                 /* Broadcast to all CPUs including self */
> 
> Does the comment mean the following two steps?
> 1. Broadcasting to everyone else.
> 2. Sending to self.

Correct.

> My understanding of the "Interrupt Routing Mode" is that it can't
> broadcast to all CPUs including self, and therefore we need the above
> two steps, which still can be a lot faster. Is my understanding
> correct?

Yes.

Thanks,

	M.
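
Spelled out, the two-step fallback for the "all CPUs including self" case
confirmed above might look like the following sketch. It assumes the local
CPU's aff0 is below 16, the same constraint gic_compute_target_list()
relies on:

	u64 mpidr = gic_cpu_to_affinity(smp_processor_id());

	/* Step 1: broadcast the SGI to every PE except self. */
	gic_broadcast_sgi(d->hwirq);
	/* Step 2: one more SGI whose target list contains only self. */
	gic_send_sgi(MPIDR_TO_SGI_CLUSTER_ID(mpidr),
		     1 << MPIDR_AFFINITY_LEVEL(mpidr, 0),
		     d->hwirq);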
Yu Zhao Oct. 25, 2024, 5:31 p.m. UTC | #5
On Fri, Oct 25, 2024 at 10:15 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 25 Oct 2024 06:07:45 +0100,
> Yu Zhao <yuzhao@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Tue, Oct 22, 2024 at 9:03 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > On Mon, 21 Oct 2024 05:22:15 +0100,
> > > Yu Zhao <yuzhao@google.com> wrote:
> > > >
> > > > @@ -1407,6 +1418,13 @@ static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
> > > >        */
> > > >       dsb(ishst);
> > > >
> > > > +     cpumask_copy(&broadcast, cpu_present_mask);
> > >
> > > Why cpu_present_mask? I'd expect that cpu_online_mask should be the
> > > correct mask to use -- we don't IPI offline CPUs, in general.
> >
> > This is exactly because "we don't IPI offline CPUs, in general",
> > assuming "we" means the kernel, not GIC.
> >
> > My interpretation of what the GIC spec says ("0b1: Interrupts routed
> > to all PEs in the system, excluding self") is that it broadcasts IPIs to
> > "cpu_present_mask" (minus the local one). So if the kernel uses
> > "cpu_online_mask" here, GIC would send IPIs to offline CPUs
> > (cpu_present_mask ^ cpu_online_mask), and I don't know whether that
> > is defined behavior.

Thanks for clarifying.

> Offline CPUs are not known to the kernel.

I assume it wouldn't matter to firmware either, correct? IOW, we
wouldn't cause firmware any trouble by letting GIC send IPIs to
(cpu_present_mask ^ cpu_online_mask), assuming those two masks can be
different on arm64 when hotplug is enabled?

> Most likely, they are either
> powered off, or spending quality time in Secure or Realm mode. Either
> way, this is none of our business.
>
> Your approach would also make the kernel perform pretty inconsistently
> depending on whether CPUs are offline or not.
>
> >
> > But if you actually meant GIC doesn't IPI offline CPUs, then yes, here
> > the kernel should use "cpu_online_mask".
> >
> > > > +     cpumask_clear_cpu(smp_processor_id(), &broadcast);
> > > > +     if (cpumask_equal(&broadcast, mask)) {
> > > > +             gic_broadcast_sgi(d->hwirq);
> > > > +             goto done;
> > > > +     }
> > >
> > > So the (valid) case where you would IPI *everyone* is not handled as a
> > > fast path? That seems a missed opportunity.
> >
> > You are right: it should handle that case.
> >
> > > This also seems like an expensive way to do it. How about something
> > > like:
> > >
> > >         int mcnt = cpumask_weight(mask);
> > >         int ocnt = cpumask_weight(cpu_online_mask);
> > >         if (mcnt == ocnt)  {
> > >                 /* Broadcast to all CPUs including self */
> >
> > Does the comment mean the following two steps?
> > 1. Broadcasting to everyone else.
> > 2. Sending to self.
>
> Correct.
>
> > My understanding of the "Interrupt Routing Mode" is that it can't
> > broadcast to all CPUs including self, and therefore we need the above
> > two steps, which still can be a lot faster. Is my understanding
> > correct?
>
> Yes.
>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.
Marc Zyngier Oct. 29, 2024, 7:02 p.m. UTC | #6
On Fri, 25 Oct 2024 18:31:01 +0100,
Yu Zhao <yuzhao@google.com> wrote:
> 
> On Fri, Oct 25, 2024 at 10:15 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Fri, 25 Oct 2024 06:07:45 +0100,
> > Yu Zhao <yuzhao@google.com> wrote:
> > >
> > > Hi Marc,
> > >
> > > On Tue, Oct 22, 2024 at 9:03 AM Marc Zyngier <maz@kernel.org> wrote:
> > > >
> > > > On Mon, 21 Oct 2024 05:22:15 +0100,
> > > > Yu Zhao <yuzhao@google.com> wrote:
> > > > >
> > > > > @@ -1407,6 +1418,13 @@ static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
> > > > >        */
> > > > >       dsb(ishst);
> > > > >
> > > > > +     cpumask_copy(&broadcast, cpu_present_mask);
> > > >
> > > > Why cpu_present_mask? I'd expect that cpu_online_mask should be the
> > > > correct mask to use -- we don't IPI offline CPUs, in general.
> > >
> > > This is exactly because "we don't IPI offline CPUs, in general",
> > > assuming "we" means the kernel, not GIC.
> > >
> > > My interpretation of what the GIC spec says ("0b1: Interrupts routed
> > > to all PEs in the system, excluding self") is that it broadcasts IPIs to
> > > "cpu_present_mask" (minus the local one). So if the kernel uses
> > > "cpu_online_mask" here, GIC would send IPIs to offline CPUs
> > > (cpu_present_mask ^ cpu_online_mask), and I don't know whether that
> > > is defined behavior.
> 
> Thanks for clarifying.
> 
> > Offline CPUs are not known to the kernel.
> 
> I assume it wouldn't matter to firmware either, correct? IOW, we

Firmware is on the secure side of the stack.

> wouldn't cause firmware any trouble by letting GIC send IPIs to
> (cpu_present_mask ^ cpu_online_mask), assuming those two masks can be
> different on arm64 when hotplug is enabled?

You can't send SGIs from non-secure to secure using ICC_SGI1R_EL1. You
would need to use ICC_ASGI1R_EL1, and have the Secure side allow such
brokenness via a configuration of GICR_NSACR. Linux doesn't use the
former, and no sane software touches the latter.

	M.
Yu Zhao Oct. 29, 2024, 7:53 p.m. UTC | #7
On Tue, Oct 29, 2024 at 1:02 PM Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 25 Oct 2024 18:31:01 +0100,
> Yu Zhao <yuzhao@google.com> wrote:
> >
> > On Fri, Oct 25, 2024 at 10:15 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > On Fri, 25 Oct 2024 06:07:45 +0100,
> > > Yu Zhao <yuzhao@google.com> wrote:
> > > >
> > > > Hi Marc,
> > > >
> > > > On Tue, Oct 22, 2024 at 9:03 AM Marc Zyngier <maz@kernel.org> wrote:
> > > > >
> > > > > On Mon, 21 Oct 2024 05:22:15 +0100,
> > > > > Yu Zhao <yuzhao@google.com> wrote:
> > > > > >
> > > > > > @@ -1407,6 +1418,13 @@ static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
> > > > > >        */
> > > > > >       dsb(ishst);
> > > > > >
> > > > > > +     cpumask_copy(&broadcast, cpu_present_mask);
> > > > >
> > > > > Why cpu_present_mask? I'd expect that cpu_online_mask should be the
> > > > > correct mask to use -- we don't IPI offline CPUs, in general.
> > > >
> > > > This is exactly because "we don't IPI offline CPUs, in general",
> > > > assuming "we" means the kernel, not GIC.
> > > >
> > > > My interpretation of what the GIC spec says ("0b1: Interrupts routed
> > > > to all PEs in the system, excluding self") is that it broadcasts IPIs to
> > > > "cpu_present_mask" (minus the local one). So if the kernel uses
> > > > "cpu_online_mask" here, GIC would send IPIs to offline CPUs
> > > > (cpu_present_mask ^ cpu_online_mask), and I don't know whether that
> > > > is defined behavior.
> >
> > Thanks for clarifying.
> >
> > > Offline CPUs are not known to the kernel.
> >
> > I assume it wouldn't matter to firmware either, correct? IOW, we
>
> Firmware is on the secure side of the stack.
>
> > wouldn't cause firmware any trouble by letting GIC send IPIs to
> > (cpu_present_mask ^ cpu_online_mask), assuming those two masks can be
> > different on arm64 when hotplug is enabled?
>
> You can't send SGIs from non-secure to secure using ICC_SGI1R_EL1.

Right, this makes sense now.

> You
> would need to use ICC_ASGI1R_EL1, and have the Secure side allow such
> brokenness via a configuration of GICR_NSACR. Linux doesn't use the
> former, and no sane software touches the latter.
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.

Patch

diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index ce87205e3e82..42c39385e1b9 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -1394,9 +1394,20 @@  static void gic_send_sgi(u64 cluster_id, u16 tlist, unsigned int irq)
 	gic_write_sgi1r(val);
 }
 
+static void gic_broadcast_sgi(unsigned int irq)
+{
+	u64 val;
+
+	val = BIT(ICC_SGI1R_IRQ_ROUTING_MODE_BIT) | (irq << ICC_SGI1R_SGI_ID_SHIFT);
+
+	pr_devel("CPU %d: broadcasting SGI %u\n", smp_processor_id(), irq);
+	gic_write_sgi1r(val);
+}
+
 static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
 {
 	int cpu;
+	cpumask_t broadcast;
 
 	if (WARN_ON(d->hwirq >= 16))
 		return;
@@ -1407,6 +1418,13 @@  static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
 	 */
 	dsb(ishst);
 
+	cpumask_copy(&broadcast, cpu_present_mask);
+	cpumask_clear_cpu(smp_processor_id(), &broadcast);
+	if (cpumask_equal(&broadcast, mask)) {
+		gic_broadcast_sgi(d->hwirq);
+		goto done;
+	}
+
 	for_each_cpu(cpu, mask) {
 		u64 cluster_id = MPIDR_TO_SGI_CLUSTER_ID(gic_cpu_to_affinity(cpu));
 		u16 tlist;
@@ -1414,7 +1432,7 @@  static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
 		tlist = gic_compute_target_list(&cpu, mask, cluster_id);
 		gic_send_sgi(cluster_id, tlist, d->hwirq);
 	}
-
+done:
 	/* Force the above writes to ICC_SGI1R_EL1 to be executed */
 	isb();
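
Putting the review feedback together, a hypothetical v2 of
gic_ipi_send_mask() might look like the sketch below. It assumes the
BIT_ULL()-based gic_broadcast_sgi() sketched earlier, uses cpu_online_mask
as Marc suggested, and replaces the copy-and-compare with his weight-based
fast paths; it is not a revision that was posted:

	static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
	{
		int cpu;
		unsigned int mcnt = cpumask_weight(mask);
		unsigned int ocnt = cpumask_weight(cpu_online_mask);

		if (WARN_ON(d->hwirq >= 16))
			return;

		/*
		 * Ensure that stores to Normal memory are visible to the
		 * other CPUs before issuing the IPI.
		 */
		dsb(ishst);

		if (mcnt == ocnt) {
			/* All online CPUs, including self: broadcast to
			 * the others, then send the SGI to the local CPU. */
			u64 mpidr = gic_cpu_to_affinity(smp_processor_id());

			gic_broadcast_sgi(d->hwirq);
			gic_send_sgi(MPIDR_TO_SGI_CLUSTER_ID(mpidr),
				     1 << MPIDR_AFFINITY_LEVEL(mpidr, 0),
				     d->hwirq);
			goto done;
		}

		if (mcnt == ocnt - 1 &&
		    !cpumask_test_cpu(smp_processor_id(), mask)) {
			/* All online CPUs except self: pure broadcast. */
			gic_broadcast_sgi(d->hwirq);
			goto done;
		}

		for_each_cpu(cpu, mask) {
			u64 cluster_id = MPIDR_TO_SGI_CLUSTER_ID(gic_cpu_to_affinity(cpu));
			u16 tlist;

			tlist = gic_compute_target_list(&cpu, mask, cluster_id);
			gic_send_sgi(cluster_id, tlist, d->hwirq);
		}
	done:
		/* Force the above writes to ICC_SGI1R_EL1 to be executed */
		isb();
	}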
 }