| Message ID | 20230123124042.718743-4-mark.rutland@arm.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n |
On Mon, 23 Jan 2023 12:40:41 +0000,
Mark Rutland <mark.rutland@arm.com> wrote:
>
> When Priority Mask Hint Enable (PMHE) == 0b1, the GIC may use the PMR
> value to determine whether to signal an IRQ to a PE, and consequently
> after a change to the PMR value, a DSB SY may be required to ensure that
> interrupts are signalled to a CPU in finite time. When PMHE == 0b0,
> interrupts are always signalled to the relevant PE, and all masking
> occurs locally, without requiring a DSB SY.
>
> Since commit:
>
>   f226650494c6aa87 ("arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear")
>
> ... we handle this dynamically: in most cases a static key is used to
> determine whether to issue a DSB SY, but the entry code must read from
> ICC_CTLR_EL1 as static keys aren't accessible from plain assembly.
>
> It would be much nicer to use an alternative instruction sequence for
> the DSB, as this would avoid the need to read from ICC_CTLR_EL1 in the
> entry code, and for most other code this will result in simpler code
> generation with fewer instructions and fewer branches.
>
> This patch adds a new ARM64_HAS_GIC_PRIO_NO_PMHE cpucap which is only
> set when ICC_CTLR_EL1.PMHE == 0b0 (and GIC priority masking is in use).
> This allows us to replace the existing users of the `gic_pmr_sync`
> static key with alternative sequences which default to a DSB SY and are
> relaxed to a NOP when PMHE is not in use.

I personally find the "negative capability" pretty annoying, especially
considering that hardly anyone uses PMHE. The way the code reads with
this patch, it is always some sort of double negation.

Can't the DSB be patched-in instead, making the PMHE cap a "positive"
one? It shouldn't affect interrupt distribution as long as the patching
occurs before we take interrupts. For modules, the patching always
occurs before we can run the module, so this should be equally safe.

The patch otherwise looks OK to me.

	M.
On Mon, Jan 23, 2023 at 01:23:31PM +0000, Marc Zyngier wrote:
> On Mon, 23 Jan 2023 12:40:41 +0000,
> Mark Rutland <mark.rutland@arm.com> wrote:
> > [...]
>
> I personally find the "negative capability" pretty annoying, especially
> considering that hardly anyone uses PMHE. The way the code reads with
> this patch, it is always some sort of double negation.

For the polarity and double-negation, I could rename this to
ARM64_HAS_GIC_PRIO_RELAXED_SYNC, if that helps?

> Can't the DSB be patched-in instead, making the PMHE cap a "positive"
> one?

We could; my rationale for doing it this way is that we can use the common
NOP patching helper, and avoid generating a copy of the `DSB SY` instruction
per pmr_sync() call (which gets generated near to the call and never gets
freed, unlike the alt_instr entries), which adds up quickly when using
pseudo-NMIs.

> It shouldn't affect interrupt distribution as long as the
> patching occurs before we take interrupts. For modules, the patching
> always occurs before we can run the module, so this should be equally
> safe.

I agree it shouldn't matter either way -- until we've patched in
ARM64_HAS_GIC_PRIO_MASKING alternatives it's not going to matter.

> The patch otherwise looks OK to me.

Thanks!

Do you have a preference between the ARM64_HAS_GIC_PRIO_RELAXED_SYNC or
ARM64_HAS_GIC_PRIO_PMHE options above?

Thanks,
Mark.
On Mon, 23 Jan 2023 14:30:17 +0000,
Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Mon, Jan 23, 2023 at 01:23:31PM +0000, Marc Zyngier wrote:
> > On Mon, 23 Jan 2023 12:40:41 +0000,
> > Mark Rutland <mark.rutland@arm.com> wrote:
> > > [...]
> >
> > I personally find the "negative capability" pretty annoying, especially
> > considering that hardly anyone uses PMHE. The way the code reads with
> > this patch, it is always some sort of double negation.
>
> For the polarity and double-negation, I could rename this to
> ARM64_HAS_GIC_PRIO_RELAXED_SYNC, if that helps?

It certainly reads much better.

> > Can't the DSB be patched-in instead, making the PMHE cap a "positive"
> > one?
>
> We could; my rationale for doing it this way is that we can use the common
> NOP patching helper, and avoid generating a copy of the `DSB SY` instruction
> per pmr_sync() call (which gets generated near to the call and never gets
> freed, unlike the alt_instr entries), which adds up quickly when using
> pseudo-NMIs.

Having an equivalent to alt_cb_patch_nops to patch in "DSB SY" would
result in similar gains, only less reusable...

> > It shouldn't affect interrupt distribution as long as the
> > patching occurs before we take interrupts. For modules, the patching
> > always occurs before we can run the module, so this should be equally
> > safe.
>
> I agree it shouldn't matter either way -- until we've patched in
> ARM64_HAS_GIC_PRIO_MASKING alternatives it's not going to matter.
>
> > The patch otherwise looks OK to me.
>
> Thanks!
>
> Do you have a preference between the ARM64_HAS_GIC_PRIO_RELAXED_SYNC or
> ARM64_HAS_GIC_PRIO_PMHE options above?

ARM64_HAS_GIC_PRIO_PMHE would have my preference (it spells out the
feature that drives the property), but this requires a bit more work
(a new patching callback), and probably results in more limited gains
memory-wise.

Thanks,

	M.
On Tue, Jan 24, 2023 at 09:31:04AM +0000, Marc Zyngier wrote:
> On Mon, 23 Jan 2023 14:30:17 +0000,
> Mark Rutland <mark.rutland@arm.com> wrote:
> > On Mon, Jan 23, 2023 at 01:23:31PM +0000, Marc Zyngier wrote:
> > > [...]
> > >
> > > I personally find the "negative capability" pretty annoying, especially
> > > considering that hardly anyone uses PMHE. The way the code reads with
> > > this patch, it is always some sort of double negation.
> >
> > For the polarity and double-negation, I could rename this to
> > ARM64_HAS_GIC_PRIO_RELAXED_SYNC, if that helps?
>
> It certainly reads much better.

FWIW, I'll go with that for now; as below it's more painful to go from
`NOP` to `DSB SY`, and either we end up needing an alt_patch_dsb_sy()
callback, or generating the alternative sequences this patch was trying
to avoid.

I'll go clear up all the related naming and comments to talk in terms of
"relaxed synchronization" rather than "no PMHE".

Thanks,
Mark.

> > > Can't the DSB be patched-in instead, making the PMHE cap a "positive"
> > > one?
> >
> > We could; [...]
>
> Having an equivalent to alt_cb_patch_nops to patch in "DSB SY" would
> result in similar gains, only less reusable...
>
> [...]
>
> ARM64_HAS_GIC_PRIO_PMHE would have my preference (it spells out the
> feature that drives the property), but this requires a bit more work
> (a new patching callback), and probably results in more limited gains
> memory-wise.
>
> Thanks,
>
> 	M.
>
> --
> Without deviation from the norm, progress is not possible.
```diff
diff --git a/arch/arm/include/asm/arch_gicv3.h b/arch/arm/include/asm/arch_gicv3.h
index f82a819eb0db..bc6d2a4362df 100644
--- a/arch/arm/include/asm/arch_gicv3.h
+++ b/arch/arm/include/asm/arch_gicv3.h
@@ -252,5 +252,10 @@ static inline void gic_arch_enable_irqs(void)
 	WARN_ON_ONCE(true);
 }
 
+static inline bool gic_uses_pmhe(void)
+{
+	return false;
+}
+
 #endif /* !__ASSEMBLY__ */
 #endif /* !__ASM_ARCH_GICV3_H */
diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index 48d4473e8eee..b57dfa6f5eb0 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -190,5 +190,10 @@ static inline void gic_arch_enable_irqs(void)
 	asm volatile ("msr daifclr, #3" : : : "memory");
 }
 
+static inline bool gic_uses_pmhe(void)
+{
+	return !cpus_have_cap(ARM64_HAS_GIC_PRIO_NO_PMHE);
+}
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_ARCH_GICV3_H */
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 2cfc4245d2e2..8d0f804572f3 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -11,6 +11,8 @@
 
 #include <linux/kasan-checks.h>
 
+#include <asm/alternative-macros.h>
+
 #define __nops(n)	".rept	" #n "\nnop\n.endr\n"
 #define nops(n)		asm volatile(__nops(n))
 
@@ -41,10 +43,11 @@
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 #define pmr_sync()						\
 do {								\
-	extern struct static_key_false gic_pmr_sync;		\
-								\
-	if (static_branch_unlikely(&gic_pmr_sync))		\
-		dsb(sy);					\
+	asm volatile(						\
+	ALTERNATIVE_CB("dsb sy",				\
+		       ARM64_HAS_GIC_PRIO_NO_PMHE,		\
+		       alt_cb_patch_nops)			\
+	);							\
 } while(0)
 #else
 #define pmr_sync()	do {} while (0)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c4e9858a7b84..e8ad66088968 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2048,6 +2048,20 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
 {
 	return enable_pseudo_nmi && has_useable_gicv3_cpuif(entry, scope);
 }
+
+static bool has_gic_no_pmhe(const struct arm64_cpu_capabilities *entry,
+			    int scope)
+{
+	if (!cpus_have_cap(ARM64_HAS_GIC_PRIO_MASKING))
+		return false;
+
+	/*
+	 * When Priority Mask Hint Enable (PMHE) == 0b0, PMR is not used as a
+	 * hint for interrupt distribution, and a DSB is not necessary when
+	 * unmasking IRQs via PMR.
+	 */
+	return !(gic_read_ctlr() & ICC_CTLR_EL1_PMHE_MASK);
+}
 #endif
 
 #ifdef CONFIG_ARM64_BTI
@@ -2543,6 +2557,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
 	},
+	{
+		/*
+		 * Depends on ARM64_HAS_GIC_PRIO_MASKING
+		 */
+		.capability = ARM64_HAS_GIC_PRIO_NO_PMHE,
+		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
+		.matches = has_gic_no_pmhe,
+	},
 #endif
 #ifdef CONFIG_ARM64_E0PD
 	{
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e2d1d3d5de1d..ec0be0424371 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -311,13 +311,16 @@ alternative_else_nop_endif
 	.endif
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
-	/* Save pmr */
-alternative_if ARM64_HAS_GIC_PRIO_MASKING
+alternative_if_not ARM64_HAS_GIC_PRIO_MASKING
+	b	.Lskip_pmr_save\@
+alternative_else_nop_endif
+
 	mrs_s	x20, SYS_ICC_PMR_EL1
 	str	x20, [sp, #S_PMR_SAVE]
 	mov	x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET
 	msr_s	SYS_ICC_PMR_EL1, x20
-alternative_else_nop_endif
+
+.Lskip_pmr_save\@:
 #endif
 
 	/*
@@ -336,15 +339,19 @@ alternative_else_nop_endif
 	.endif
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
-	/* Restore pmr */
-alternative_if ARM64_HAS_GIC_PRIO_MASKING
+alternative_if_not ARM64_HAS_GIC_PRIO_MASKING
+	b	.Lskip_pmr_restore\@
+alternative_else_nop_endif
+
 	ldr	x20, [sp, #S_PMR_SAVE]
 	msr_s	SYS_ICC_PMR_EL1, x20
-	mrs_s	x21, SYS_ICC_CTLR_EL1
-	tbz	x21, #6, .L__skip_pmr_sync\@	// Check for ICC_CTLR_EL1.PMHE
-	dsb	sy				// Ensure priority change is seen by redistributor
-.L__skip_pmr_sync\@:
+
+	/* Ensure priority change is seen by redistributor */
+alternative_if_not ARM64_HAS_GIC_PRIO_NO_PMHE
+	dsb	sy
 alternative_else_nop_endif
+
+.Lskip_pmr_restore\@:
 #endif
 
 	ldp	x21, x22, [sp, #S_PC]		// load ELR, SPSR
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index d0e9bb5c91fc..97e750a35f70 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -67,9 +67,7 @@ KVM_NVHE_ALIAS(__hyp_stub_vectors);
 KVM_NVHE_ALIAS(vgic_v2_cpuif_trap);
 KVM_NVHE_ALIAS(vgic_v3_cpuif_trap);
 
-/* Static key checked in pmr_sync(). */
 #ifdef CONFIG_ARM64_PSEUDO_NMI
-KVM_NVHE_ALIAS(gic_pmr_sync);
 /* Static key checked in GIC_PRIO_IRQOFF. */
 KVM_NVHE_ALIAS(gic_nonsecure_priorities);
 #endif
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index d70435a1d48c..3cb5417b52fc 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -29,6 +29,7 @@ HAS_GENERIC_AUTH_ARCH_QARMA3
 HAS_GENERIC_AUTH_ARCH_QARMA5
 HAS_GENERIC_AUTH_IMP_DEF
 HAS_GIC_PRIO_MASKING
+HAS_GIC_PRIO_NO_PMHE
 HAS_GIC_SYSREG_CPUIF
 HAS_LDAPR
 HAS_LSE_ATOMICS
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 997104d4338e..64f6a868d77f 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -89,15 +89,6 @@ static DEFINE_STATIC_KEY_TRUE(supports_deactivate_key);
  */
 static DEFINE_STATIC_KEY_FALSE(supports_pseudo_nmis);
 
-/*
- * Global static key controlling whether an update to PMR allowing more
- * interrupts requires to be propagated to the redistributor (DSB SY).
- * And this needs to be exported for modules to be able to enable
- * interrupts...
- */
-DEFINE_STATIC_KEY_FALSE(gic_pmr_sync);
-EXPORT_SYMBOL(gic_pmr_sync);
-
 DEFINE_STATIC_KEY_FALSE(gic_nonsecure_priorities);
 EXPORT_SYMBOL(gic_nonsecure_priorities);
 
@@ -1768,16 +1759,8 @@ static void gic_enable_nmi_support(void)
 	for (i = 0; i < gic_data.ppi_nr; i++)
 		refcount_set(&ppi_nmi_refs[i], 0);
 
-	/*
-	 * Linux itself doesn't use 1:N distribution, so has no need to
-	 * set PMHE. The only reason to have it set is if EL3 requires it
-	 * (and we can't change it).
-	 */
-	if (gic_read_ctlr() & ICC_CTLR_EL1_PMHE_MASK)
-		static_branch_enable(&gic_pmr_sync);
-
 	pr_info("Pseudo-NMIs enabled using %s ICC_PMR_EL1 synchronisation\n",
-		static_branch_unlikely(&gic_pmr_sync) ? "forced" : "relaxed");
+		gic_uses_pmhe() ? "forced" : "relaxed");
 
 	/*
 	 * How priority values are used by the GIC depends on two things:
```
When Priority Mask Hint Enable (PMHE) == 0b1, the GIC may use the PMR
value to determine whether to signal an IRQ to a PE, and consequently
after a change to the PMR value, a DSB SY may be required to ensure that
interrupts are signalled to a CPU in finite time. When PMHE == 0b0,
interrupts are always signalled to the relevant PE, and all masking
occurs locally, without requiring a DSB SY.

Since commit:

  f226650494c6aa87 ("arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear")

... we handle this dynamically: in most cases a static key is used to
determine whether to issue a DSB SY, but the entry code must read from
ICC_CTLR_EL1 as static keys aren't accessible from plain assembly.

It would be much nicer to use an alternative instruction sequence for
the DSB, as this would avoid the need to read from ICC_CTLR_EL1 in the
entry code, and for most other code this will result in simpler code
generation with fewer instructions and fewer branches.

This patch adds a new ARM64_HAS_GIC_PRIO_NO_PMHE cpucap which is only
set when ICC_CTLR_EL1.PMHE == 0b0 (and GIC priority masking is in use).
This allows us to replace the existing users of the `gic_pmr_sync`
static key with alternative sequences which default to a DSB SY and are
relaxed to a NOP when PMHE is not in use.

The entry assembly management of the PMR is slightly restructured to use
a branch (rather than multiple NOPs) when priority masking is not in
use. This is more in keeping with other alternatives in the entry
assembly, and permits the use of a separate alternative for the
PMHE-dependent DSB SY (and removal of the conditional branch this
currently requires). For consistency I've adjusted both the save and
restore paths.

According to bloat-o-meter, when building defconfig +
CONFIG_ARM64_PSEUDO_NMI=y this shrinks the kernel text by ~4KiB:

| add/remove: 4/2 grow/shrink: 42/310 up/down: 332/-5032 (-4700)

The resulting vmlinux is ~66KiB smaller, though the resulting Image size
is unchanged due to padding and alignment:

| [mark@lakrids:~/src/linux]% ls -al vmlinux-*
| -rwxr-xr-x 1 mark mark 137508344 Jan 17 14:11 vmlinux-after
| -rwxr-xr-x 1 mark mark 137575440 Jan 17 13:49 vmlinux-before
| [mark@lakrids:~/src/linux]% ls -al Image-*
| -rw-r--r-- 1 mark mark 38777344 Jan 17 14:11 Image-after
| -rw-r--r-- 1 mark mark 38777344 Jan 17 13:49 Image-before

Prior to this patch we did not verify the state of ICC_CTLR_EL1.PMHE on
secondary CPUs. As of this patch this is verified by the cpufeature code
when using GIC priority masking (i.e. when using pseudo-NMIs).

Note that since commit:

  7e3a57fa6ca831fa ("arm64: Document ICC_CTLR_EL3.PMHE setting requirements")

... Documentation/arm64/booting.rst specifies:

|       - ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
|         all CPUs the kernel is executing on, and must stay constant
|         for the lifetime of the kernel.

... so that should not adversely affect any compliant systems, and as
we'll only check for the absence of PMHE when using pseudo-NMIs, this
will only fire when such a mismatch would adversely affect the system.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm/include/asm/arch_gicv3.h   |  5 +++++
 arch/arm64/include/asm/arch_gicv3.h |  5 +++++
 arch/arm64/include/asm/barrier.h    | 11 +++++++----
 arch/arm64/kernel/cpufeature.c      | 22 ++++++++++++++++++++++
 arch/arm64/kernel/entry.S           | 25 ++++++++++++++++---------
 arch/arm64/kernel/image-vars.h      |  2 --
 arch/arm64/tools/cpucaps            |  1 +
 drivers/irqchip/irq-gic-v3.c        | 19 +------------------
 8 files changed, 57 insertions(+), 33 deletions(-)