| Message ID | 58E5FC0A.4010809@linux.intel.com (mailing list archive) |
| --- | --- |
| State | New, archived |
>>> On 06.04.17 at 10:27, <yu.c.zhang@linux.intel.com> wrote:
> On 4/6/2017 3:48 PM, Jan Beulich wrote:
>>>>> On 05.04.17 at 20:04, <yu.c.zhang@linux.intel.com> wrote:
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -288,6 +288,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>>              put_gfn(d, gmfn);
>>>              return 1;
>>>          }
>>> +        if ( unlikely(p2mt == p2m_ioreq_server) )
>>> +            p2m_change_type_one(d, gmfn,
>>> +                                p2m_ioreq_server, p2m_ram_rw);
>>> +
>>>  #else
>>>      mfn = gfn_to_mfn(d, _gfn(gmfn));
>>>  #endif
>> To be honest, this looks more like a quick hack than a proper solution
>> at first glance. To me it would seem preferable if the count were
>
> Yeah, right. :)
>
>> adjusted at the point the P2M entry is being replaced (i.e. down the
>> call stack from guest_physmap_remove_page()). The closer to the
>> actual changing of the P2M entry, the less likely you are to miss any
>> call paths (as was already explained by George while suggesting to put
>> the accounting into the leaf-most EPT/NPT functions).
>
> Well, I thought I had explained the reason why I have always been
> hesitant to do the count in atomic_write_ept_entry(). But it seems I
> did not make it clear:

Well, there was no reason to re-explain. I understand your reasoning,
and I understand George's. Hence my request to move it _closer_ to the
leaf function, not specifically to move it _into_ there.

> But I have to admit I did not think of the extreme scenarios raised by
> George - I had always assumed a p2m_ioreq_server page would not be
> allocated to the balloon driver while it is in use.
>
> So here is another proposal - we shall not allow a p2m_ioreq_server
> page to be ballooned out. I mean, if some bug in the kernel really
> allocates a p2m_ioreq_server page to the balloon driver, or if the
> driver is a malicious one which does not tell the device model that
> this gfn shall no longer be emulated, the hypervisor shall let the
> ballooning fail for this gfn. After all, if such a situation happens,
> the guest or the device model already has bugs, and these last 2
> patches are there to make sure that even if there is a bug in the
> guest or the device model, Xen will help do the cleanup instead of
> tolerating guest bugs.
>
> If you think this is reasonable, I have drafted a patch, like this:

Well, that's still the same hack as before (merely extended to PoD
code), isn't it? I'd still prefer the accounting to be got right
instead of the page-removal attempt being failed.

Jan
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index d5fea72..ff726ad 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -606,7 +606,8 @@ p2m_pod_decrease_reservation(struct domain *d,
             BUG_ON(p2m->pod.entry_count < 0);
             pod -= n;
         }
-        else if ( steal_for_cache && p2m_is_ram(t) )
+        else if ( steal_for_cache && p2m_is_ram(t) &&
+                  (t != p2m_ioreq_server) )
         {
             /*
              * If we need less than 1 << cur_order, we may end up stealing
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 7dbddda..40d5545 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -288,6 +288,14 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
         put_gfn(d, gmfn);
         return 1;
     }
+    if ( unlikely(p2mt == p2m_ioreq_server) )
+    {
+        put_gfn(d, gmfn);
+        gdprintk(XENLOG_INFO, "Domain %u page %lx cannot be removed.\n",
+                 d->domain_id, gmfn);
+        return 0;
+    }
+
 #else
     mfn = gfn_to_mfn(d, _gfn(gmfn));
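
For contrast with the draft above, a minimal sketch of the accounting Jan is asking for might look like the following. This is a hypothetical illustration, not the actual Xen code: the stub types and the `ioreq_entry_count` field are assumptions standing in for Xen's real p2m structures. The point is that the counter is adjusted at the single place where a P2M entry's type is replaced, so every call path (ballooning, PoD reclaim, ordinary type changes) is covered without per-caller checks.

```c
/*
 * Hypothetical sketch, NOT the real Xen implementation: stub types
 * stand in for Xen's p2m structures so the idea compiles stand-alone.
 * In Xen, this logic would sit close to the leaf EPT/NPT entry write,
 * as suggested in the discussion above.
 */
#include <assert.h>

typedef enum {
    p2m_ram_rw,
    p2m_ioreq_server,
    /* ... other P2M types elided ... */
} p2m_type_t;

struct p2m_domain {
    long ioreq_entry_count;   /* live p2m_ioreq_server entries */
};

/* Called wherever an entry's type changes from 'ot' to 'nt'. */
static void p2m_account_type_change(struct p2m_domain *p2m,
                                    p2m_type_t ot, p2m_type_t nt)
{
    if ( ot == nt )
        return;

    if ( ot == p2m_ioreq_server )
    {
        assert(p2m->ioreq_entry_count > 0);
        p2m->ioreq_entry_count--;
    }
    if ( nt == p2m_ioreq_server )
        p2m->ioreq_entry_count++;
}
```

With the bookkeeping placed there, a p2m_ioreq_server page that is ballooned out (or stolen by PoD) simply decrements the count as its entry is rewritten to another type; there is no need to either convert the type in guest_remove_page() or fail the removal.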