[1/6] x86/vmx: Rewrite vmx_sync_pir_to_irr() to be more efficient

Message ID 20240625190719.788643-2-andrew.cooper3@citrix.com (mailing list archive)
State New, archived
Series xen: Rework for_each_set_bit()

Commit Message

Andrew Cooper June 25, 2024, 7:07 p.m. UTC
There are two issues.  First, pi_test_and_clear_on() pulls the cache-line to
the CPU and dirties it even if there's nothing outstanding, but the final
for_each_set_bit() is O(256) when O(8) would do, and would avoid multiple
atomic updates to the same IRR word.

Rewrite it from scratch, explaining what's going on at each step.

Bloat-o-meter reports 177 -> 145 (net -32), but the better aspect is the
removal of the calls to __find_{first,next}_bit() hidden behind
for_each_set_bit().

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

The main purpose of this is to get rid of for_each_set_bit().

Full side-by-side diff:
  https://termbin.com/5wsb

The first loop ends up being unrolled identically, although there are 64-bit
movs to reload 0 for the xchg, which is definitely suboptimal.  Open-coding
asm ("xchg") without a memory clobber gets this down to 32-bit movs, which is
an improvement but not ideal.  However, I didn't fancy going that far.

Also, the entirety of pi_desc is embedded in struct vcpu, which means that
when we're executing in Xen, the prefetcher is going to be stealing it back
from the IOMMU all the time.  This is a data structure which really should
*not* be adjacent to all the other misc data in the vcpu.
---
 xen/arch/x86/hvm/vmx/vmx.c | 61 +++++++++++++++++++++++++++++++++-----
 1 file changed, 53 insertions(+), 8 deletions(-)

Comments

Jan Beulich June 26, 2024, 9:49 a.m. UTC | #1
On 25.06.2024 21:07, Andrew Cooper wrote:
> There are two issues.  First, pi_test_and_clear_on() pulls the cache-line to
> the CPU and dirties it even if there's nothing outstanding, but the final
> for_each_set_bit() is O(256) when O(8) would do, and would avoid multiple
> atomic updates to the same IRR word.

The way it's worded (grammar wise) it appears as if the 2nd issue is missing
from this description. Perhaps you meant to break the sentence at "but" (and
re-word a little what follows), which feels a little unmotivated to me (as a
non-native speaker, i.e. may not mean anything) anyway? Or maybe something
simply got lost in the middle?

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2321,18 +2321,63 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>  
>  static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
>  {
> -    struct vlapic *vlapic = vcpu_vlapic(v);
> -    unsigned int group, i;
> -    DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
> +    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
> +    union {
> +        uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
> +        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
> +    } vec;
> +    uint32_t *irr;
> +    bool on;
>  
> -    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
> +    /*
> +     * The PIR is a contended cacheline which bounces between the CPU and
> +     * IOMMU.  The IOMMU updates the entire PIR atomically, but we can't
> +     * express the same on the CPU side, so care has to be taken.
> +     *
> +     * First, do a plain read of ON.  If the PIR hasn't been modified, this
> +     * will keep the cacheline Shared and not pull it Exclusive on the CPU.
> +     */
> +    if ( !pi_test_on(desc) )
>          return;
>  
> -    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
> -        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
> +    /*
> +     * Second, if the plain read said that ON was set, we must clear it with
> +     * an atomic action.  This will bring the cacheline to Exclusive on the
> +     * CPU.
> +     *
> +     * This should always succeed because no one else should be playing with
> +     * the PIR behind our back, but assert so just in case.
> +     */

Isn't "playing with" more strict than what is the case, and what we need
here? Aiui nothing should _clear this bit_ behind our back, while PIR
covers more than just this one bit, and the bit may also become reset
immediately after we cleared it.

> +    on = pi_test_and_clear_on(desc);
> +    ASSERT(on);
>  
> -    for_each_set_bit(i, pending_intr, X86_NR_VECTORS)
> -        vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
> +    /*
> +     * The cacheline is now Exclusive on the CPU, and the IOMMU has indicated
> +     * (via ON being set) that at least one vector is pending too.

This isn't quite correct aiui, and hence perhaps better not to state it
exactly like this: While we're ...

>  Atomically
> +     * read and clear the entire pending bitmap as fast as we can, to reduce the
> +     * window that the IOMMU may steal the cacheline back from us.
> +     *
> +     * It is a performance concern, but not a correctness concern.  If the
> +     * IOMMU does steal the cacheline back, we'll just wait to get it back
> +     * again.
> +     */
> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
> +        vec._64[i] = xchg(&desc->pir[i], 0);

... still ahead of or in this loop, new bits may become set which we then
may handle right away. The "on" indication on the next entry into this
logic may then be misleading, as we may not find any set bit.

All the code changes look good to me, otoh.

Jan

> +    /*
> +     * Finally, merge the pending vectors into IRR.  The IRR register is
> +     * scattered in memory, so we have to do this 32 bits at a time.
> +     */
> +    irr = (uint32_t *)&vcpu_vlapic(v)->regs->data[APIC_IRR];
> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
> +    {
> +        if ( !vec._32[i] )
> +            continue;
> +
> +        asm ( "lock or %[val], %[irr]"
> +              : [irr] "+m" (irr[i * 4])
> +              : [val] "r" (vec._32[i]) );
> +    }
>  }
>  
>  static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
Andrew Cooper June 26, 2024, 12:54 p.m. UTC | #2
On 26/06/2024 10:49 am, Jan Beulich wrote:
> On 25.06.2024 21:07, Andrew Cooper wrote:
>> There are two issues.  First, pi_test_and_clear_on() pulls the cache-line to
>> the CPU and dirties it even if there's nothing outstanding, but the final
>> for_each_set_bit() is O(256) when O(8) would do, and would avoid multiple
>> atomic updates to the same IRR word.
> The way it's worded (grammar wise) it appears as if the 2nd issue is missing
> from this description. Perhaps you meant to break the sentence at "but" (and
> re-word a little what follows), which feels a little unmotivated to me (as a
> non-native speaker, i.e. may not mean anything) anyway? Or maybe something
> simply got lost in the middle?

You're right - the grammar is not great.  I'll rework it.

>
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -2321,18 +2321,63 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>>  
>>  static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
>>  {
>> -    struct vlapic *vlapic = vcpu_vlapic(v);
>> -    unsigned int group, i;
>> -    DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
>> +    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
>> +    union {
>> +        uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
>> +        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
>> +    } vec;
>> +    uint32_t *irr;
>> +    bool on;
>>  
>> -    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
>> +    /*
>> +     * The PIR is a contended cacheline which bounces between the CPU and
>> +     * IOMMU.  The IOMMU updates the entire PIR atomically, but we can't
>> +     * express the same on the CPU side, so care has to be taken.
>> +     *
>> +     * First, do a plain read of ON.  If the PIR hasn't been modified, this
>> +     * will keep the cacheline Shared and not pull it Exclusive on the CPU.
>> +     */
>> +    if ( !pi_test_on(desc) )
>>          return;
>>  
>> -    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
>> -        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
>> +    /*
>> +     * Second, if the plain read said that ON was set, we must clear it with
>> +     * an atomic action.  This will bring the cacheline to Exclusive on the
>> +     * CPU.
>> +     *
>> +     * This should always succeed because no one else should be playing with
>> +     * the PIR behind our back, but assert so just in case.
>> +     */
> Isn't "playing with" more strict than what is the case, and what we need
> here? Aiui nothing should _clear this bit_ behind our back, while PIR
> covers more than just this one bit, and the bit may also become reset
> immediately after we cleared it.

The IOMMU or other CPU forwarding an IPI strictly sets ON, and this CPU
(either the logic here, or microcode when in non-root mode) strictly
clears it.

But it is ON specifically that we care about, so I'll make that more clear.

>
>> +    on = pi_test_and_clear_on(desc);
>> +    ASSERT(on);
>>  
>> -    for_each_set_bit(i, pending_intr, X86_NR_VECTORS)
>> -        vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
>> +    /*
>> +     * The cacheline is now Exclusive on the CPU, and the IOMMU has indicated
>> +     * (via ON being set) that at least one vector is pending too.
> This isn't quite correct aiui, and hence perhaps better not to state it
> exactly like this: While we're ...
>
>>  Atomically
>> +     * read and clear the entire pending bitmap as fast as we can, to reduce the
>> +     * window that the IOMMU may steal the cacheline back from us.
>> +     *
>> +     * It is a performance concern, but not a correctness concern.  If the
>> +     * IOMMU does steal the cacheline back, we'll just wait to get it back
>> +     * again.
>> +     */
>> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
>> +        vec._64[i] = xchg(&desc->pir[i], 0);
> ... still ahead of or in this loop, new bits may become set which we then
> may handle right away. The "on" indication on the next entry into this
> logic may then be misleading, as we may not find any set bit.

Hmm.  Yes.

The IOMMU atomically swaps the entire cacheline in one go, so won't
produce this state.  However, the SDM algorithm for consuming it says
specifically:

1) LOCK AND to clear ON, leaving everything else unmodified
2) 256-bit-granular read&0 PIR, merging into VIRR

Which now I think about it, is almost certainly because the Atom cores
only have a 256-bit datapath.

Another Xen CPU sending an IPI sideways will have to do:

    set_bit(pir[vec])
    set_bit(ON)
    IPI(delivery vector)

in this order for everything to work.

ON is the signal for "go and scan the PIR", and (other than exact
ordering races during delivery) means that there is a bit set in PIR.
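
Spelled out as code -- a sketch only, with PI_ON_BIT and the control field as
stand-in names rather than the exact Xen helpers -- the sender side is:

    static void post_vector(struct pi_desc *desc, unsigned int vec,
                            unsigned int notify_vec, unsigned int cpu)
    {
        set_bit(vec, desc->pir);                    /* 1) publish the vector */
        set_bit(PI_ON_BIT, &desc->control);         /* 2) then set ON */
        send_IPI_mask(cpumask_of(cpu), notify_vec); /* 3) finally notify */
    }

i.e. the PIR bit is published before ON, and ON before the notification.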

I'll see if I can make this clearer.


> All the code changes look good to me, otoh.

Thanks.

~Andrew
Andrew Cooper June 26, 2024, 1:04 p.m. UTC | #3
On 26/06/2024 1:54 pm, Andrew Cooper wrote:
> On 26/06/2024 10:49 am, Jan Beulich wrote:
>> On 25.06.2024 21:07, Andrew Cooper wrote:
>>> There are two issues.  First, pi_test_and_clear_on() pulls the cache-line to
>>> the CPU and dirties it even if there's nothing outstanding, but the final
>>> for_each_set_bit() is O(256) when O(8) would do, and would avoid multiple
>>> atomic updates to the same IRR word.
>> The way it's worded (grammar wise) it appears as if the 2nd issue is missing
>> from this description. Perhaps you meant to break the sentence at "but" (and
>> re-word a little what follows), which feels a little unmotivated to me (as a
>> non-native speaker, i.e. may not mean anything) anyway? Or maybe something
>> simply got lost in the middle?
> You're right - the grammar is not great.  I'll rework it.
>
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -2321,18 +2321,63 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>>>  
>>>  static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
>>>  {
>>> -    struct vlapic *vlapic = vcpu_vlapic(v);
>>> -    unsigned int group, i;
>>> -    DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
>>> +    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
>>> +    union {
>>> +        uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
>>> +        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
>>> +    } vec;
>>> +    uint32_t *irr;
>>> +    bool on;
>>>  
>>> -    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
>>> +    /*
>>> +     * The PIR is a contended cacheline which bounces between the CPU and
>>> +     * IOMMU.  The IOMMU updates the entire PIR atomically, but we can't
>>> +     * express the same on the CPU side, so care has to be taken.
>>> +     *
>>> +     * First, do a plain read of ON.  If the PIR hasn't been modified, this
>>> +     * will keep the cacheline Shared and not pull it Exclusive on the CPU.
>>> +     */
>>> +    if ( !pi_test_on(desc) )
>>>          return;
>>>  
>>> -    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
>>> -        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
>>> +    /*
>>> +     * Second, if the plain read said that ON was set, we must clear it with
>>> +     * an atomic action.  This will bring the cacheline to Exclusive on the
>>> +     * CPU.
>>> +     *
>>> +     * This should always succeed because no one else should be playing with
>>> +     * the PIR behind our back, but assert so just in case.
>>> +     */
>> Isn't "playing with" more strict than what is the case, and what we need
>> here? Aiui nothing should _clear this bit_ behind our back, while PIR
>> covers more than just this one bit, and the bit may also become reset
>> immediately after we cleared it.
> The IOMMU or other CPU forwarding an IPI strictly sets ON, and this CPU
> (either the logic here, or microcode when in non-root mode) strictly
> clears it.
>
> But it is ON specifically that we care about, so I'll make that more clear.
>
>>> +    on = pi_test_and_clear_on(desc);
>>> +    ASSERT(on);
>>>  
>>> -    for_each_set_bit(i, pending_intr, X86_NR_VECTORS)
>>> -        vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
>>> +    /*
>>> +     * The cacheline is now Exclusive on the CPU, and the IOMMU has indicated
>>> +     * (via ON being set) that at least one vector is pending too.
>> This isn't quite correct aiui, and hence perhaps better not to state it
>> exactly like this: While we're ...
>>
>>>  Atomically
>>> +     * read and clear the entire pending bitmap as fast as we can, to reduce the
>>> +     * window that the IOMMU may steal the cacheline back from us.
>>> +     *
>>> +     * It is a performance concern, but not a correctness concern.  If the
>>> +     * IOMMU does steal the cacheline back, we'll just wait to get it back
>>> +     * again.
>>> +     */
>>> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
>>> +        vec._64[i] = xchg(&desc->pir[i], 0);
>> ... still ahead of or in this loop, new bits may become set which we then
>> may handle right away. The "on" indication on the next entry into this
>> logic may then be misleading, as we may not find any set bit.
> Hmm.  Yes.
>
> The IOMMU atomically swaps the entire cacheline in one go, so won't
> produce this state.  However, the SDM algorithm for consuming it says
> specifically:
>
> 1) LOCK AND to clear ON, leaving everything else unmodified
> 2) 256-bit-granular read&0 PIR, merging into VIRR
>
> Which now I think about it, is almost certainly because the Atom cores
> only have a 256-bit datapath.
>
> Another Xen CPU sending an IPI sideways will have to do:
>
>     set_bit(pir[vec])
>     set_bit(ON)
>     IPI(delivery vector)
>
> in this order for everything to work.
>
> ON is the signal for "go and scan the PIR", and (other than exact
> ordering races during delivery) means that there is a bit set in PIR.
>
> I'll see if I can make this clearer.
>
>
>> All the code changes look good to me, otoh.
> Thanks.

One final thing.

This logic here depends on interrupts not being enabled between these
atomic actions, and entering non-root mode.

Specifically, Xen must not service a pending delivery-notification
vector between this point and the VMEntry microcode repeating the same
scan on the PIR Descriptor.

Getting this wrong means that we'll miss the delivery of vectors which
arrive between here and the next time something causes a
delivery-notification vector to be sent.

However, I've got no idea how to reasonably express this with
assertions.  We could in principle have a per-cpu "mustn't enable
interrupts" flag, checked in local_irq_enable/restore(), but it only
works in HVM context, and gets too messy IMO.
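
For concreteness, that flag would be something like this -- a sketch only,
none of these names exist today:

    DEFINE_PER_CPU(bool, irq_enable_forbidden);

    /* Set after draining the PIR, cleared again on the VMEntry path. */
    static inline void set_irq_enable_forbidden(bool forbid)
    {
        this_cpu(irq_enable_forbidden) = forbid;
    }

    /* Hypothetical check to bolt into local_irq_enable()/local_irq_restore(). */
    static inline void assert_irq_enable_ok(void)
    {
        ASSERT(!this_cpu(irq_enable_forbidden));
    }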

~Andrew
Jan Beulich June 26, 2024, 1:16 p.m. UTC | #4
On 26.06.2024 15:04, Andrew Cooper wrote:
> One final thing.
> 
> This logic here depends on interrupts not being enabled between these
> atomic actions, and entering non-root mode.
> 
> Specifically, Xen must not service a pending delivery-notification
> vector between this point and the VMEntry microcode repeating the same
> scan on the PIR Descriptor.
> 
> Getting this wrong means that we'll miss the delivery of vectors which
> arrive between here and the next time something causes a
> delivery-notification vector to be sent.
> 
> However, I've got no idea how to reasonably express this with
> assertions.  We could in principle have a per-cpu "mustn't enable
> interrupts" flag, checked in local_irq_enable/restore(), but it only
> works in HVM context, and gets too messy IMO.

I agree. It's also nothing this patch changes; it was like this before
already. If and when we can think of a good way of expressing it, then
surely we could improve things here.

Jan

Patch

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f16faa6a615c..948ad48a4757 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2321,18 +2321,63 @@  static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
 
 static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
 {
-    struct vlapic *vlapic = vcpu_vlapic(v);
-    unsigned int group, i;
-    DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
+    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
+    union {
+        uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
+        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
+    } vec;
+    uint32_t *irr;
+    bool on;
 
-    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
+    /*
+     * The PIR is a contended cacheline which bounces between the CPU and
+     * IOMMU.  The IOMMU updates the entire PIR atomically, but we can't
+     * express the same on the CPU side, so care has to be taken.
+     *
+     * First, do a plain read of ON.  If the PIR hasn't been modified, this
+     * will keep the cacheline Shared and not pull it Exclusive on the CPU.
+     */
+    if ( !pi_test_on(desc) )
         return;
 
-    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
-        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
+    /*
+     * Second, if the plain read said that ON was set, we must clear it with
+     * an atomic action.  This will bring the cacheline to Exclusive on the
+     * CPU.
+     *
+     * This should always succeed because no one else should be playing with
+     * the PIR behind our back, but assert so just in case.
+     */
+    on = pi_test_and_clear_on(desc);
+    ASSERT(on);
 
-    for_each_set_bit(i, pending_intr, X86_NR_VECTORS)
-        vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
+    /*
+     * The cacheline is now Exclusive on the CPU, and the IOMMU has indicated
+     * (via ON being set) that at least one vector is pending too.  Atomically
+     * read and clear the entire pending bitmap as fast as we can, to reduce the
+     * window that the IOMMU may steal the cacheline back from us.
+     *
+     * It is a performance concern, but not a correctness concern.  If the
+     * IOMMU does steal the cacheline back, we'll just wait to get it back
+     * again.
+     */
+    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
+        vec._64[i] = xchg(&desc->pir[i], 0);
+
+    /*
+     * Finally, merge the pending vectors into IRR.  The IRR register is
+     * scattered in memory, so we have to do this 32 bits at a time.
+     */
+    irr = (uint32_t *)&vcpu_vlapic(v)->regs->data[APIC_IRR];
+    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
+    {
+        if ( !vec._32[i] )
+            continue;
+
+        asm ( "lock or %[val], %[irr]"
+              : [irr] "+m" (irr[i * 4])
+              : [val] "r" (vec._32[i]) );
+    }
 }
 
 static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)