| Field | Value |
| --- | --- |
| Message ID | a2fabad3-2a05-da71-64b8-bd77ac955b82@suse.com (mailing list archive) |
| State | Superseded |
| Series | x86/HVM: implement memory read caching |
On 31/01/2020 16:46, Jan Beulich wrote:
> It needs calculating only in one out of three cases. Re-structure the
> code a little such that the variable truly gets calculated only when we
> don't get any insn bytes from elsewhere, and hence need to (try to)
> fetch them. Also OR in PFEC_insn_fetch right in the initializer.
>
> While in this mood, restrict addr's scope as well.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Jan
> Beulich
> Sent: 31 January 2020 16:46
> To: xen-devel@lists.xenproject.org
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei
> Liu <wl@xen.org>; Paul Durrant <paul@xen.org>
> Subject: [Xen-devel] [PATCH v4 7/7] x86/HVM: reduce scope of pfec in
> hvm_emulate_init_per_insn()
>
> It needs calculating only in one out of three cases. Re-structure the
> code a little such that the variable truly gets calculated only when we
> don't get any insn bytes from elsewhere, and hence need to (try to)
> fetch them. Also OR in PFEC_insn_fetch right in the initializer.
>
> While in this mood, restrict addr's scope as well.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <pdurrant@amazon.com>

> ---
> v4: New.
>
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -2762,8 +2762,6 @@ void hvm_emulate_init_per_insn(
>      unsigned int insn_bytes)
>  {
>      struct vcpu *curr = current;
> -    unsigned int pfec = PFEC_page_present;
> -    unsigned long addr;
>
>      hvmemul_ctxt->ctxt.lma = hvm_long_mode_active(curr);
>
> @@ -2778,14 +2776,23 @@ void hvm_emulate_init_per_insn(
>              hvmemul_ctxt->seg_reg[x86_seg_ss].db ? 32 : 16;
>      }
>
> -    if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
> -        pfec |= PFEC_user_mode;
> -
>      hvmemul_ctxt->insn_buf_eip = hvmemul_ctxt->ctxt.regs->rip;
> -    if ( !insn_bytes )
> +
> +    if ( insn_bytes )
>      {
> +        hvmemul_ctxt->insn_buf_bytes = insn_bytes;
> +        memcpy(hvmemul_ctxt->insn_buf, insn_buf, insn_bytes);
> +    }
> +    else if ( !(hvmemul_ctxt->insn_buf_bytes =
> +                hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf)) )
> +    {
> +        unsigned int pfec = PFEC_page_present | PFEC_insn_fetch;
> +        unsigned long addr;
> +
> +        if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
> +            pfec |= PFEC_user_mode;
> +
>          hvmemul_ctxt->insn_buf_bytes =
> -            hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
>              (hvm_virtual_to_linear_addr(x86_seg_cs,
>                                          &hvmemul_ctxt->seg_reg[x86_seg_cs],
>                                          hvmemul_ctxt->insn_buf_eip,
> @@ -2795,15 +2802,9 @@ void hvm_emulate_init_per_insn(
>                                          &addr) &&
>               hvm_copy_from_guest_linear(hvmemul_ctxt->insn_buf, addr,
>                                          sizeof(hvmemul_ctxt->insn_buf),
> -                                        pfec | PFEC_insn_fetch,
> -                                        NULL) == HVMTRANS_okay) ?
> +                                        pfec, NULL) == HVMTRANS_okay) ?
>              sizeof(hvmemul_ctxt->insn_buf) : 0;
>      }
> -    else
> -    {
> -        hvmemul_ctxt->insn_buf_bytes = insn_bytes;
> -        memcpy(hvmemul_ctxt->insn_buf, insn_buf, insn_bytes);
> -    }
>  }
>
>  void hvm_emulate_writeback(
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2762,8 +2762,6 @@ void hvm_emulate_init_per_insn(
     unsigned int insn_bytes)
 {
     struct vcpu *curr = current;
-    unsigned int pfec = PFEC_page_present;
-    unsigned long addr;
 
     hvmemul_ctxt->ctxt.lma = hvm_long_mode_active(curr);
 
@@ -2778,14 +2776,23 @@ void hvm_emulate_init_per_insn(
             hvmemul_ctxt->seg_reg[x86_seg_ss].db ? 32 : 16;
     }
 
-    if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
-        pfec |= PFEC_user_mode;
-
     hvmemul_ctxt->insn_buf_eip = hvmemul_ctxt->ctxt.regs->rip;
-    if ( !insn_bytes )
+
+    if ( insn_bytes )
     {
+        hvmemul_ctxt->insn_buf_bytes = insn_bytes;
+        memcpy(hvmemul_ctxt->insn_buf, insn_buf, insn_bytes);
+    }
+    else if ( !(hvmemul_ctxt->insn_buf_bytes =
+                hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf)) )
+    {
+        unsigned int pfec = PFEC_page_present | PFEC_insn_fetch;
+        unsigned long addr;
+
+        if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
+            pfec |= PFEC_user_mode;
+
         hvmemul_ctxt->insn_buf_bytes =
-            hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
             (hvm_virtual_to_linear_addr(x86_seg_cs,
                                         &hvmemul_ctxt->seg_reg[x86_seg_cs],
                                         hvmemul_ctxt->insn_buf_eip,
@@ -2795,15 +2802,9 @@ void hvm_emulate_init_per_insn(
                                         &addr) &&
              hvm_copy_from_guest_linear(hvmemul_ctxt->insn_buf, addr,
                                         sizeof(hvmemul_ctxt->insn_buf),
-                                        pfec | PFEC_insn_fetch,
-                                        NULL) == HVMTRANS_okay) ?
+                                        pfec, NULL) == HVMTRANS_okay) ?
             sizeof(hvmemul_ctxt->insn_buf) : 0;
     }
-    else
-    {
-        hvmemul_ctxt->insn_buf_bytes = insn_bytes;
-        memcpy(hvmemul_ctxt->insn_buf, insn_buf, insn_bytes);
-    }
 }
 
 void hvm_emulate_writeback(
It needs calculating only in one out of three cases. Re-structure the
code a little such that the variable truly gets calculated only when we
don't get any insn bytes from elsewhere, and hence need to (try to)
fetch them. Also OR in PFEC_insn_fetch right in the initializer.

While in this mood, restrict addr's scope as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v4: New.
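For readers skimming the thread, the control flow the patch ends up with can be summarised in a small stand-alone sketch. This is not the Xen code: the PFEC_* bit values are copied only for illustration, and get_cached_bytes()/fetch_from_guest() are made-up stand-ins for hvm_get_insn_bytes() and the hvm_virtual_to_linear_addr()/hvm_copy_from_guest_linear() pair. The point it shows is the three-case structure, with pfec built (insn-fetch bit already in the initializer) solely in the branch that actually fetches from guest memory.

/* Simplified illustration only -- constants and helpers are stand-ins,
 * not the real Xen definitions. */
#include <stdio.h>
#include <string.h>

#define PFEC_page_present (1u << 0)
#define PFEC_user_mode    (1u << 2)
#define PFEC_insn_fetch   (1u << 4)

struct insn_ctxt {
    unsigned char buf[16];
    unsigned int  buf_bytes;
    int           user_mode;          /* stand-in for the DPL == 3 check */
};

/* Stand-in for hvm_get_insn_bytes(): pretend no cached copy exists. */
static unsigned int get_cached_bytes(unsigned char *buf)
{
    (void)buf;
    return 0;
}

/* Stand-in for the translate-and-copy path; pfec is only needed here. */
static unsigned int fetch_from_guest(unsigned char *buf, size_t len,
                                     unsigned int pfec)
{
    (void)pfec;
    memset(buf, 0x90, len);           /* pretend we fetched NOPs */
    return (unsigned int)len;
}

void init_per_insn(struct insn_ctxt *ctxt,
                   const unsigned char *insn_buf, unsigned int insn_bytes)
{
    if ( insn_bytes )                 /* case 1: caller supplied the bytes */
    {
        ctxt->buf_bytes = insn_bytes;
        memcpy(ctxt->buf, insn_buf, insn_bytes);
    }
    else if ( !(ctxt->buf_bytes = get_cached_bytes(ctxt->buf)) )
    {
        /* Case 3: only here is pfec needed, so it is built only here,
         * with the insn-fetch bit OR-ed in right in the initializer. */
        unsigned int pfec = PFEC_page_present | PFEC_insn_fetch;

        if ( ctxt->user_mode )
            pfec |= PFEC_user_mode;

        ctxt->buf_bytes = fetch_from_guest(ctxt->buf, sizeof(ctxt->buf), pfec);
    }
    /* Case 2 (cached bytes found) falls through with buf_bytes already set. */
}

int main(void)
{
    struct insn_ctxt ctxt = { .user_mode = 1 };

    init_per_insn(&ctxt, NULL, 0);
    printf("fetched %u bytes\n", ctxt.buf_bytes);
    return 0;
}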