
[v2,2/2] x86/IRQ: allocate guest array of max size only for shareable IRQs

Message ID 1606960706-21274-2-git-send-email-igor.druzhinin@citrix.com (mailing list archive)
State Superseded
Series [v2,1/2] x86/IRQ: make max number of guests for a shared IRQ configurable

Commit Message

Igor Druzhinin Dec. 3, 2020, 1:58 a.m. UTC
... and increase default "irq_max_guests" to 32.

There is no need for a guest array larger than a single entry for non-shareable
IRQs, and allocating one could impact scalability when high "irq_max_guests"
values are in use - every IRQ in the system, including MSIs, would be
supplied with an array of that size.

Since using a higher "irq_max_guests" value is now less impactful, bump
the default to 32. That should give more headroom for future systems.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---

New in v2.
This was suggested by Jan and is optional from my side.

---
 docs/misc/xen-command-line.pandoc | 2 +-
 xen/arch/x86/irq.c                | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)

Comments

Jan Beulich Dec. 3, 2020, 8:49 a.m. UTC | #1
On 03.12.2020 02:58, Igor Druzhinin wrote:
> @@ -1540,6 +1540,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>      unsigned int        irq;
>      struct irq_desc         *desc;
>      irq_guest_action_t *action, *newaction = NULL;
> +    unsigned int        max_nr_guests = will_share ? irq_max_guests : 1;
>      int                 rc = 0;
>  
>      WARN_ON(!spin_is_locked(&v->domain->event_lock));
> @@ -1571,7 +1572,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>          {
>              spin_unlock_irq(&desc->lock);
>              if ( (newaction = xmalloc_bytes(sizeof(irq_guest_action_t) +
> -                  irq_max_guests * sizeof(action->guest[0]))) != NULL &&
> +                  max_nr_guests * sizeof(action->guest[0]))) != NULL &&
>                   zalloc_cpumask_var(&newaction->cpu_eoi_map) )
>                  goto retry;
>              xfree(newaction);
> @@ -1640,7 +1641,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>          goto retry;
>      }
>  
> -    if ( action->nr_guests == irq_max_guests )
> +    if ( action->nr_guests == max_nr_guests )
>      {
>          printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
>                 "Already at max share %u, increase with irq_max_guests= option.\n",

Just as a minor remark - I don't think this last hunk is necessary,
as the !will_share case won't make it here unless action->nr_guests
is still zero. At which point the need for the new local variable
would also disappear. But I'm not going to insist, as there may be
the view that the code is more clear this way. However, if clarity
(in the sense of "obviously correct") is the goal, then I think
using >= instead of == would now become preferable.

Jan

Patch

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index f5f230c..dea2a22 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1644,7 +1644,7 @@  This option is ignored in **pv-shim** mode.
 ### irq_max_guests (x86)
 > `= <integer>`
 
-> Default: `16`
+> Default: `32`
 
 Maximum number of guests IRQ could be shared between, i.e. a limit on
 the number of guests it is possible to start each having assigned a device
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 5ae9846..70b7a53 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -440,7 +440,7 @@  int __init init_irq_data(void)
         irq_to_desc(irq)->irq = irq;
 
     if ( !irq_max_guests || irq_max_guests > 255)
-        irq_max_guests = 16;
+        irq_max_guests = 32;
 
 #ifdef CONFIG_PV
     /* Never allocate the hypercall vector or Linux/BSD fast-trap vector. */
@@ -1540,6 +1540,7 @@  int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
     unsigned int        irq;
     struct irq_desc         *desc;
     irq_guest_action_t *action, *newaction = NULL;
+    unsigned int        max_nr_guests = will_share ? irq_max_guests : 1;
     int                 rc = 0;
 
     WARN_ON(!spin_is_locked(&v->domain->event_lock));
@@ -1571,7 +1572,7 @@  int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         {
             spin_unlock_irq(&desc->lock);
             if ( (newaction = xmalloc_bytes(sizeof(irq_guest_action_t) +
-                  irq_max_guests * sizeof(action->guest[0]))) != NULL &&
+                  max_nr_guests * sizeof(action->guest[0]))) != NULL &&
                  zalloc_cpumask_var(&newaction->cpu_eoi_map) )
                 goto retry;
             xfree(newaction);
@@ -1640,7 +1641,7 @@  int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         goto retry;
     }
 
-    if ( action->nr_guests == irq_max_guests )
+    if ( action->nr_guests == max_nr_guests )
     {
         printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
                "Already at max share %u, increase with irq_max_guests= option.\n",