Message ID | 20250304232218.2768344-1-andrew.cooper3@citrix.com (mailing list archive) |
---|---|
State | New |
Series | [v3] x86/vmx: Rewrite vmx_sync_pir_to_irr() to be more efficient |
On 05.03.2025 00:22, Andrew Cooper wrote:
> There are two issues. First, pi_test_and_clear_on() pulls the cache-line to
> the CPU and dirties it even if there's nothing outstanding, but the final
> bitmap_for_each() is O(256) when O(8) would do, and would avoid multiple
> atomic updates to the same IRR word.
>
> Rewrite it from scratch, explaining what's going on at each step.
>
> Bloat-o-meter reports 177 -> 145 (net -32), but real improvement is the
> removal of calls to __find_{first,next}_bit() hidden behind bitmap_for_each().

Nit: As said in reply to v2, there are no underscores on the two find
functions bitmap_for_each() uses.

> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
>
> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1699791518
>
> v3:
>  * Fix IRR scatter address calculation
>  * Spelling/Grammar improvements

The description starting with "There are two issues" I fear it still
doesn't become quite clear what the 2nd issue is. I can only assume it's
the use of bitmap_for_each() that now goes away.

Preferably with this tweaked a little further
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
On 05/03/2025 11:36 am, Jan Beulich wrote:
> On 05.03.2025 00:22, Andrew Cooper wrote:
>> There are two issues. First, pi_test_and_clear_on() pulls the cache-line to
>> the CPU and dirties it even if there's nothing outstanding, but the final
>> bitmap_for_each() is O(256) when O(8) would do, and would avoid multiple
>> atomic updates to the same IRR word.
>>
>> Rewrite it from scratch, explaining what's going on at each step.
>>
>> Bloat-o-meter reports 177 -> 145 (net -32), but real improvement is the
>> removal of calls to __find_{first,next}_bit() hidden behind bitmap_for_each().
> Nit: As said in reply to v2, there are no underscores on the two find
> functions bitmap_for_each() uses.

I did change the commit message assuming you were right, but the
disassembly never lies. What bitmap_for_each() uses are very much not
functions in x86.

>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>
>> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1699791518
>>
>> v3:
>>  * Fix IRR scatter address calculation
>>  * Spelling/Grammar improvements
> The description starting with "There are two issues" I fear it still
> doesn't become quite clear what the 2nd issue is. I can only assume it's
> the use of bitmap_for_each() that now goes away.
>
> Preferably with this tweaked a little further
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Oh. ", but the final" can turn into ", and second,". That should make
it clearer.

Thanks,

~Andrew
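For context on the disassembly point above: on x86 the bit-scan primitives that bitmap_for_each() expands to are typically inlined down to a single bsf/tzcnt instruction, so no out-of-line __find_*_bit calls appear in the generated code. The helper below is a generic illustration of that kind of inline wrapper, not Xen's actual implementation; the name scan_first_set() is made up for this sketch.

```c
#include <stdio.h>

/*
 * Illustrative only: an inline bit-scan wrapper of the sort an arch header
 * might provide.  Behaviour is undefined for word == 0, as with bsf itself.
 */
static inline unsigned int scan_first_set(unsigned long word)
{
    unsigned long bit;

    asm ( "bsf %[word], %[bit]" : [bit] "=r" (bit) : [word] "rm" (word) );

    return bit;
}

int main(void)
{
    /* Vector 0x23 pending in an otherwise empty 64-bit word. */
    printf("first set bit: %u\n", scan_first_set(1UL << 0x23));

    return 0;
}
```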
```diff
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d87da427ade1..ab881d6f8c42 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2314,18 +2314,72 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
 
 static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
 {
-    struct vlapic *vlapic = vcpu_vlapic(v);
-    unsigned int group, i;
-    DECLARE_BITMAP(pending_intr, X86_IDT_VECTORS);
+    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
+    union {
+        unsigned long _ul[X86_IDT_VECTORS / BITS_PER_LONG];
+        uint32_t _32[X86_IDT_VECTORS / (sizeof(uint32_t) * 8)];
+    } vec;
+    uint32_t *irr;
+    bool on;
 
-    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
+    /*
+     * The PIR is a contended cacheline which bounces between the CPU(s) and
+     * IOMMU(s).  An IOMMU updates the entire PIR atomically, but we can't
+     * express the same on the CPU side, so care has to be taken.
+     *
+     * First, do a plain read of ON.  If the PIR hasn't been modified, this
+     * will keep the cacheline Shared and not pull it Exclusive on the current
+     * CPU.
+     */
+    if ( !pi_test_on(desc) )
         return;
 
-    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
-        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
+    /*
+     * Second, if the plain read said that ON was set, we must clear it with
+     * an atomic action.  This will bring the cacheline to Exclusive on the
+     * current CPU.
+     *
+     * This should always succeed because no-one else should be playing with
+     * the PIR behind our back, but assert so just in case.
+     */
+    on = pi_test_and_clear_on(desc);
+    ASSERT(on);
+
+    /*
+     * The cacheline will have become Exclusive on the current CPU, and
+     * because ON was set, some other entity (an IOMMU, or Xen on another CPU)
+     * has indicated that the PIR needs re-scanning.
+     *
+     * Note: Entities which can't update the entire cacheline atomically
+     *       (i.e. Xen on another CPU) are required to update PIR first, then
+     *       set ON.  Therefore, there is a corner case where we may have
+     *       found and processed the PIR updates "last time around" and only
+     *       found ON this time around.  This is fine; the logic still
+     *       operates correctly.
+     *
+     * Atomically read and clear the entire pending bitmap as fast as we can,
+     * to reduce the window where another entity may steal the cacheline back
+     * from us.  This is a performance concern, not a correctness concern; if
+     * another entity does steal the cacheline, we'll just wait for it to
+     * return.
+     */
+    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._ul); ++i )
+        vec._ul[i] = xchg(&desc->pir[i], 0);
+
+    /*
+     * Finally, merge the pending vectors into IRR.  The IRR register is
+     * scattered in memory, so we have to do this 32 bits at a time.
+     */
+    irr = (uint32_t *)&vcpu_vlapic(v)->regs->data[APIC_IRR];
+    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
+    {
+        if ( !vec._32[i] )
+            continue;
 
-    bitmap_for_each ( i, pending_intr, X86_IDT_VECTORS )
-        vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
+        asm ( "lock or %[val], %[irr]"
+              : [irr] "+m" (irr[i * 4])
+              : [val] "r" (vec._32[i]) );
+    }
 }
 
 static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
```
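A note on the irr[i * 4] indexing in the final hunk: the local APIC register window places each 32-bit register in its own 16-byte slot, so the eight 32-bit IRR words sit 16 bytes apart rather than back to back. Below is a minimal standalone sketch of that address arithmetic; the APIC_IRR_BASE constant and irr_word_offset() helper are illustrative names for this sketch, not Xen's.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative constant: IRR starts at offset 0x200 in the APIC register page. */
#define APIC_IRR_BASE 0x200u

/*
 * Each 32-bit APIC register occupies a 16-byte slot, so IRR word i
 * (vectors 32*i .. 32*i+31) lives at APIC_IRR_BASE + i * 16.  Viewed as an
 * array of uint32_t starting at IRR, that is index i * 4, which is why the
 * patch ORs vec._32[i] into irr[i * 4].
 */
static unsigned int irr_word_offset(unsigned int i)
{
    return APIC_IRR_BASE + i * 16;
}

int main(void)
{
    for ( unsigned int i = 0; i < 8; ++i )
        printf("IRR word %u: byte offset %#x, uint32_t index %u\n",
               i, irr_word_offset(i), (irr_word_offset(i) - APIC_IRR_BASE) / 4);

    return 0;
}
```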
There are two issues. First, pi_test_and_clear_on() pulls the cache-line to
the CPU and dirties it even if there's nothing outstanding, but the final
bitmap_for_each() is O(256) when O(8) would do, and would avoid multiple
atomic updates to the same IRR word.

Rewrite it from scratch, explaining what's going on at each step.

Bloat-o-meter reports 177 -> 145 (net -32), but real improvement is the
removal of calls to __find_{first,next}_bit() hidden behind bitmap_for_each().

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1699791518

v3:
 * Fix IRR scatter address calculation
 * Spelling/Grammar improvements

v2:
 * Extend the comments
---
 xen/arch/x86/hvm/vmx/vmx.c | 70 +++++++++++++++++++++++++++++++++-----
 1 file changed, 62 insertions(+), 8 deletions(-)
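To make the O(256) vs O(8) claim in the commit message concrete, here is a rough standalone comparison of the two merge shapes. C11 atomics stand in for Xen's locked OR and vlapic_set_vector(), and the 16-byte register stride is ignored for brevity, so this is a sketch of the access pattern rather than the real code; merge_per_bit() and merge_per_word() are names invented for this example.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NR_VECTORS 256
#define NR_WORDS   (NR_VECTORS / 32)

/* Old shape: visit all 256 vectors and update IRR one bit at a time. */
static void merge_per_bit(_Atomic uint32_t irr[NR_WORDS],
                          const uint32_t pending[NR_WORDS])
{
    for ( unsigned int v = 0; v < NR_VECTORS; ++v )
        if ( pending[v / 32] & (1u << (v % 32)) )
            atomic_fetch_or(&irr[v / 32], 1u << (v % 32));
}

/* New shape: at most 8 word-wide merges, skipping words with nothing pending. */
static void merge_per_word(_Atomic uint32_t irr[NR_WORDS],
                           const uint32_t pending[NR_WORDS])
{
    for ( unsigned int i = 0; i < NR_WORDS; ++i )
        if ( pending[i] )
            atomic_fetch_or(&irr[i], pending[i]);
}

int main(void)
{
    _Atomic uint32_t irr_a[NR_WORDS] = { 0 }, irr_b[NR_WORDS] = { 0 };
    uint32_t pending[NR_WORDS] = { 0 };

    pending[1] = 1u << (0x30 % 32);   /* pretend vector 0x30 is pending */
    pending[7] = 1u << (0xff % 32);   /* ... and vector 0xff */

    merge_per_bit(irr_a, pending);
    merge_per_word(irr_b, pending);

    printf("results match: %d\n", irr_a[1] == irr_b[1] && irr_a[7] == irr_b[7]);

    return 0;
}
```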