
[5/6,v5] kvm: booke: clear host tlb reference flag on guest tlb invalidation

Message ID 1379570566-3715-6-git-send-email-Bharat.Bhushan@freescale.com (mailing list archive)
State New, archived

Commit Message

Bharat Bhushan Sept. 19, 2013, 6:02 a.m. UTC
On booke, "struct tlbe_ref" contains host TLB mapping information
(pfn: the host pfn backing a guest pfn; flags: attributes associated with
this mapping) for a guest TLB entry. When a guest creates a TLB entry,
the corresponding "struct tlbe_ref" is set to point to a valid "pfn" and
the attributes are recorded in its "flags" field. When a guest TLB entry
is invalidated, the flags field of the corresponding "struct tlbe_ref" is
updated to indicate that it is no longer valid, and we also selectively
clear some other attribute bits: for example, if E500_TLB_BITMAP was set
then we clear E500_TLB_BITMAP, and if E500_TLB_TLB0 was set then we clear
that.

Ideally we should clear the complete "flags", as the entry is invalid and
nothing in it can be reused. The other part of the problem is that when we
reuse the same entry we do not clear the field either (we OR new bits into
it instead).

So far this has worked because the selective clearing described above
clears exactly the flags that were set during TLB mapping. But once more
attributes are added, each of them would also have to be selectively
cleared, which should not be necessary.

With this patch we do both:
        - Clear "flags" when invalidating;
        - Clear "flags" when reusing the same entry later.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v3-> v5
 - New patch (found this issue when doing vfio-pci development)

 arch/powerpc/kvm/e500_mmu_host.c |   12 +++++++-----
 1 files changed, 7 insertions(+), 5 deletions(-)

Comments

Scott Wood Sept. 19, 2013, 9:07 p.m. UTC | #1
On Thu, 2013-09-19 at 11:32 +0530, Bharat Bhushan wrote:
> On booke, "struct tlbe_ref" contains host TLB mapping information
> (pfn: the host pfn backing a guest pfn; flags: attributes associated with
> this mapping) for a guest TLB entry. When a guest creates a TLB entry,
> the corresponding "struct tlbe_ref" is set to point to a valid "pfn" and
> the attributes are recorded in its "flags" field. When a guest TLB entry
> is invalidated, the flags field of the corresponding "struct tlbe_ref" is
> updated to indicate that it is no longer valid, and we also selectively
> clear some other attribute bits: for example, if E500_TLB_BITMAP was set
> then we clear E500_TLB_BITMAP, and if E500_TLB_TLB0 was set then we clear
> that.
> 
> Ideally we should clear the complete "flags", as the entry is invalid and
> nothing in it can be reused. The other part of the problem is that when we
> reuse the same entry we do not clear the field either (we OR new bits into
> it instead).
> 
> So far this has worked because the selective clearing described above
> clears exactly the flags that were set during TLB mapping. But once more
> attributes are added, each of them would also have to be selectively
> cleared, which should not be necessary.
> 
> With this patch we do both:
>         - Clear "flags" when invalidating;
>         - Clear "flags" when reusing the same entry later.
> 
> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> ---
> v3-> v5
>  - New patch (found this issue when doing vfio-pci development)
> 
>  arch/powerpc/kvm/e500_mmu_host.c |   12 +++++++-----
>  1 files changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
> index 1c6a9d7..60f5a3c 100644
> --- a/arch/powerpc/kvm/e500_mmu_host.c
> +++ b/arch/powerpc/kvm/e500_mmu_host.c
> @@ -217,7 +217,8 @@ void inval_gtlbe_on_host(struct kvmppc_vcpu_e500 *vcpu_e500, int tlbsel,
>  		}
>  		mb();
>  		vcpu_e500->g2h_tlb1_map[esel] = 0;
> -		ref->flags &= ~(E500_TLB_BITMAP | E500_TLB_VALID);
> +		/* Clear flags as TLB is not backed by the host anymore */
> +		ref->flags = 0;
>  		local_irq_restore(flags);
>  	}

This breaks when you have both E500_TLB_BITMAP and E500_TLB_TLB0 set.

Instead, just convert the final E500_TLB_VALID clearing at the end into
ref->flags = 0, and convert the early return a few lines earlier into
conditional execution of the tlbil_one().

-Scott



--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Bharat Bhushan Sept. 20, 2013, 4:19 a.m. UTC | #2
> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Friday, September 20, 2013 2:38 AM
> To: Bhushan Bharat-R65777
> Cc: benh@kernel.crashing.org; agraf@suse.de; paulus@samba.org;
> kvm@vger.kernel.org; kvm-ppc@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
> Bhushan Bharat-R65777
> Subject: Re: [PATCH 5/6 v5] kvm: booke: clear host tlb reference flag on guest
> tlb invalidation
> 
> This breaks when you have both E500_TLB_BITMAP and E500_TLB_TLB0 set.

I do not see any case where we set both E500_TLB_BITMAP and E500_TLB_TLB0. Also we have not optimized that yet (keeping track of multiple shadow TLB0 entries for one guest TLB1 entry)

We uses these bit flags only for TLB1 and if size of stlbe is 4K then we set E500_TLB_TLB0  otherwise we set E500_TLB_BITMAP. Although I think that E500_TLB_BITMAP should be set only if stlbe size is less than gtlbe size.

> 
> Instead, just convert the final E500_TLB_VALID clearing at the end into
> ref->flags = 0, and convert the early return a few lines earlier into
> conditional execution of the tlbil_one().

This looks better, will send the patch shortly.

Thanks
-Bharat

> 
> -Scott

Scott Wood Sept. 20, 2013, 4:18 p.m. UTC | #3
On Thu, 2013-09-19 at 23:19 -0500, Bhushan Bharat-R65777 wrote:
> 
> > -----Original Message-----
> > From: Wood Scott-B07421
> > Sent: Friday, September 20, 2013 2:38 AM
> > To: Bhushan Bharat-R65777
> > Cc: benh@kernel.crashing.org; agraf@suse.de; paulus@samba.org;
> > kvm@vger.kernel.org; kvm-ppc@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
> > Bhushan Bharat-R65777
> > Subject: Re: [PATCH 5/6 v5] kvm: booke: clear host tlb reference flag on guest
> > tlb invalidation
> > 
> > This breaks when you have both E500_TLB_BITMAP and E500_TLB_TLB0 set.
> 
> I do not see any case where we set both E500_TLB_BITMAP and
> E500_TLB_TLB0.

This would happen if you have a guest TLB1 entry that is backed by some
4K pages and some larger pages (e.g. if the guest maps CCSR with one big
TLB1 and there are varying I/O passthrough regions mapped).  It's not
common, but it's possible.

>  Also we have not optimized that yet (keeping track of
> multiple shadow TLB0 entries for one guest TLB1 entry)

This is about correctness, not optimization.

> We uses these bit flags only for TLB1 and if size of stlbe is 4K then
> we set E500_TLB_TLB0  otherwise we set E500_TLB_BITMAP. Although I
> think that E500_TLB_BITMAP should be set only if stlbe size is less
> than gtlbe size.

Why?  Even if there's only one bit set in the map, we need it to keep
track of which entry was used.

-Scott



Bharat Bhushan Sept. 20, 2013, 6:04 p.m. UTC | #4
> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Friday, September 20, 2013 9:48 PM
> To: Bhushan Bharat-R65777
> Cc: Wood Scott-B07421; benh@kernel.crashing.org; agraf@suse.de;
> paulus@samba.org; kvm@vger.kernel.org; kvm-ppc@vger.kernel.org; linuxppc-
> dev@lists.ozlabs.org
> Subject: Re: [PATCH 5/6 v5] kvm: booke: clear host tlb reference flag on guest
> tlb invalidation
> 
> This would happen if you have a guest TLB1 entry that is backed by some 4K pages
> and some larger pages (e.g. if the guest maps CCSR with one big
> TLB1 and there are varying I/O passthrough regions mapped).  It's not common,
> but it's possible.

Agree

> 
> >  Also we have not optimized that yet (keeping track of multiple shadow
> > TLB0 entries for one guest TLB1 entry)
> 
> This is about correctness, not optimization.
> 
> > We uses these bit flags only for TLB1 and if size of stlbe is 4K then
> > we set E500_TLB_TLB0  otherwise we set E500_TLB_BITMAP. Although I
> > think that E500_TLB_BITMAP should be set only if stlbe size is less
> > than gtlbe size.
> 
> Why?  Even if there's only one bit set in the map, we need it to keep track of
> which entry was used.

If there is one entry then will not this be simple/faster to not lookup bitmap and guest->host array?
A flag indicate it is 1:1 map and this is physical address.

-Bharat

> 
> -Scott

Scott Wood Sept. 20, 2013, 6:08 p.m. UTC | #5
On Fri, 2013-09-20 at 13:04 -0500, Bhushan Bharat-R65777 wrote:
> 
> > -----Original Message-----
> > From: Wood Scott-B07421
> > Sent: Friday, September 20, 2013 9:48 PM
> > To: Bhushan Bharat-R65777
> > Cc: Wood Scott-B07421; benh@kernel.crashing.org; agraf@suse.de;
> > paulus@samba.org; kvm@vger.kernel.org; kvm-ppc@vger.kernel.org; linuxppc-
> > dev@lists.ozlabs.org
> > Subject: Re: [PATCH 5/6 v5] kvm: booke: clear host tlb reference flag on guest
> > tlb invalidation
> > 
> > On Thu, 2013-09-19 at 23:19 -0500, Bhushan Bharat-R65777 wrote:
> > > We uses these bit flags only for TLB1 and if size of stlbe is 4K then
> > > we set E500_TLB_TLB0  otherwise we set E500_TLB_BITMAP. Although I
> > > think that E500_TLB_BITMAP should be set only if stlbe size is less
> > > than gtlbe size.
> > 
> > Why?  Even if there's only one bit set in the map, we need it to keep track of
> > which entry was used.
> 
> If there is one entry then will not this be simple/faster to not lookup bitmap and guest->host array?
> A flag indicate it is 1:1 map and this is physical address.

The difference would be negligible, and you'd have added overhead (both
runtime and complexity) of making this a special case.

-Scott



Bharat Bhushan Sept. 20, 2013, 6:11 p.m. UTC | #6
> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Friday, September 20, 2013 11:38 PM
> To: Bhushan Bharat-R65777
> Cc: Wood Scott-B07421; benh@kernel.crashing.org; agraf@suse.de;
> paulus@samba.org; kvm@vger.kernel.org; kvm-ppc@vger.kernel.org; linuxppc-
> dev@lists.ozlabs.org
> Subject: Re: [PATCH 5/6 v5] kvm: booke: clear host tlb reference flag on guest
> tlb invalidation
> 
> On Fri, 2013-09-20 at 13:04 -0500, Bhushan Bharat-R65777 wrote:
> > If there is one entry then will not this be simple/faster to not lookup bitmap
> > and guest->host array?
> > A flag indicate it is 1:1 map and this is physical address.
> 
> The difference would be negligible, and you'd have added overhead (both runtime
> and complexity) of making this a special case.

Maybe you are right; I will see if I can give it a try :)
BTW I have already sent v6 of this patch.

-Bharat

> 
> -Scott
> 

Patch

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 1c6a9d7..60f5a3c 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -217,7 +217,8 @@  void inval_gtlbe_on_host(struct kvmppc_vcpu_e500 *vcpu_e500, int tlbsel,
 		}
 		mb();
 		vcpu_e500->g2h_tlb1_map[esel] = 0;
-		ref->flags &= ~(E500_TLB_BITMAP | E500_TLB_VALID);
+		/* Clear flags as TLB is not backed by the host anymore */
+		ref->flags = 0;
 		local_irq_restore(flags);
 	}
 
@@ -227,7 +228,8 @@  void inval_gtlbe_on_host(struct kvmppc_vcpu_e500 *vcpu_e500, int tlbsel,
 		 * rarely and is not worth optimizing. Invalidate everything.
 		 */
 		kvmppc_e500_tlbil_all(vcpu_e500);
-		ref->flags &= ~(E500_TLB_TLB0 | E500_TLB_VALID);
+		/* Clear flags as TLB is not backed by the host anymore */
+		ref->flags = 0;
 	}
 
 	/* Already invalidated in between */
@@ -237,8 +239,8 @@  void inval_gtlbe_on_host(struct kvmppc_vcpu_e500 *vcpu_e500, int tlbsel,
 	/* Guest tlbe is backed by at most one host tlbe per shadow pid. */
 	kvmppc_e500_tlbil_one(vcpu_e500, gtlbe);
 
-	/* Mark the TLB as not backed by the host anymore */
-	ref->flags &= ~E500_TLB_VALID;
+	/* Clear flags as TLB is not backed by the host anymore */
+	ref->flags = 0;
 }
 
 static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
@@ -251,7 +253,7 @@  static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
 					 pfn_t pfn)
 {
 	ref->pfn = pfn;
-	ref->flags |= E500_TLB_VALID;
+	ref->flags = E500_TLB_VALID;
 
 	if (tlbe_is_writable(gtlbe))
 		kvm_set_pfn_dirty(pfn);