Message ID | 20221123002030.92716-3-dwmw2@infradead.org (mailing list archive)
---|---
State | New, archived
Series | [1/3] KVM: x86/xen: Validate port number in SCHEDOP_poll
On Wed, 2022-11-23 at 00:20 +0000, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> In the case where a GPC is refreshed to a different location within the
> same page, we didn't bother to update it. Mostly we don't need to, but
> since the ->khva field also includes the offset within the page, that
> does have to be updated.
>
> Fixes: 982ed0de4753 ("KVM: Reinstate gfn_to_pfn_cache with invalidation support")

Hm, wait. That commit wasn't actually broken because at that point the
page offset was included in the uhva too, so the uhva *did* change and
we'd (gratuitously) take the slower path through hva_to_pfn_retry()
when the GPA moved within the same page.

So I think this should actually be:

Fixes: 3ba2c95ea180 ("KVM: Do not incorporate page offset into gfn=>pfn cache user address")

Which means it's only relevant back to v6.0 stable, not all the way
back to v5.17.

> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Reviewed-by: Paul Durrant <paul@xen.org>
> Cc: stable@kernel.org
>
> ---
>  virt/kvm/pfncache.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
> index bd4a46aee384..5f83321bfd2a 100644
> --- a/virt/kvm/pfncache.c
> +++ b/virt/kvm/pfncache.c
> @@ -297,7 +297,12 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
>  	if (!gpc->valid || old_uhva != gpc->uhva) {
>  		ret = hva_to_pfn_retry(kvm, gpc);
>  	} else {
> -		/* If the HVA→PFN mapping was already valid, don't unmap it. */
> +		/*
> +		 * If the HVA→PFN mapping was already valid, don't unmap it.
> +		 * But do update gpc->khva because the offset within the page
> +		 * may have changed.
> +		 */
> +		gpc->khva = old_khva + page_offset;
>  		old_pfn = KVM_PFN_ERR_FAULT;
>  		old_khva = NULL;
>  		ret = 0;
On Wed, Nov 23, 2022, David Woodhouse wrote:
> On Wed, 2022-11-23 at 00:20 +0000, David Woodhouse wrote:
> > From: David Woodhouse <dwmw@amazon.co.uk>
> >
> > In the case where a GPC is refreshed to a different location within the
> > same page, we didn't bother to update it. Mostly we don't need to, but
> > since the ->khva field also includes the offset within the page, that
> > does have to be updated.
> >
> > Fixes: 982ed0de4753 ("KVM: Reinstate gfn_to_pfn_cache with invalidation support")
>
> Hm, wait. That commit wasn't actually broken because at that point the
> page offset was included in the uhva too, so the uhva *did* change and
> we'd (gratuitously) take the slower path through hva_to_pfn_retry()
> when the GPA moved within the same page.
>
> So I think this should actually be:
>
> Fixes: 3ba2c95ea180 ("KVM: Do not incorporate page offset into gfn=>pfn cache user address")

Ya.

> Which means it's only relevant back to v6.0 stable, not all the way
> back to v5.17.

Probably a moot point in the long run since that commit was tagged for
stable@ too, in order to simplify the fixes that followed.

> > Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> > Reviewed-by: Paul Durrant <paul@xen.org>
> > Cc: stable@kernel.org
> >
> > ---

Reviewed-by: Sean Christopherson <seanjc@google.com>

> >  virt/kvm/pfncache.c | 7 ++++++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
> > index bd4a46aee384..5f83321bfd2a 100644
> > --- a/virt/kvm/pfncache.c
> > +++ b/virt/kvm/pfncache.c
> > @@ -297,7 +297,12 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
> >  	if (!gpc->valid || old_uhva != gpc->uhva) {
> >  		ret = hva_to_pfn_retry(kvm, gpc);
> >  	} else {
> > -		/* If the HVA→PFN mapping was already valid, don't unmap it. */
> > +		/*
> > +		 * If the HVA→PFN mapping was already valid, don't unmap it.
> > +		 * But do update gpc->khva because the offset within the page
> > +		 * may have changed.
> > +		 */
> > +		gpc->khva = old_khva + page_offset;

If/when we rework the APIs, another possible approach would be to store
only the page aligned address, e.g. force the user to pass in offset+len
by doing something like:

	r = kvm_gpc_lock(...);
	if (r)
		return r;

	my_struct = kvm_gpc_kmap(..., offset, len);
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index bd4a46aee384..5f83321bfd2a 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -297,7 +297,12 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 	if (!gpc->valid || old_uhva != gpc->uhva) {
 		ret = hva_to_pfn_retry(kvm, gpc);
 	} else {
-		/* If the HVA→PFN mapping was already valid, don't unmap it. */
+		/*
+		 * If the HVA→PFN mapping was already valid, don't unmap it.
+		 * But do update gpc->khva because the offset within the page
+		 * may have changed.
+		 */
+		gpc->khva = old_khva + page_offset;
 		old_pfn = KVM_PFN_ERR_FAULT;
 		old_khva = NULL;
 		ret = 0;