diff mbox series

[v6,4/5] mm: add 'is_special_page' inline function...

Message ID 20200310174917.1514-5-paul@xen.org (mailing list archive)
State New, archived
Series remove one more shared xenheap page: shared_info

Commit Message

Paul Durrant March 10, 2020, 5:49 p.m. UTC
From: Paul Durrant <pdurrant@amazon.com>

... to cover xenheap and PGC_extra pages.

PGC_extra pages are intended to hold data structures that are associated
with a domain and may be mapped by that domain. They should not be treated
as 'normal' guest pages (i.e. RAM or page tables). Hence, in many cases
where code currently tests is_xen_heap_page() it should also check for
the PGC_extra bit in 'count_info'.

This patch therefore defines is_special_page() to cover both cases and
converts tests of is_xen_heap_page() to is_special_page() where
appropriate.

Signed-off-by: Paul Durrant <paul@xen.org>
Acked-by: Tamas K Lengyel <tamas@tklengyel.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>

In auditing open-coded tests of PGC_xen_heap, I am unsure if offline_page()
needs to check for PGC_extra pages too.

v6:
 - Convert open-coded checks of PGC_xen_heap to use is_special_page()
   where appropriate

v4:
 - Use inline function instead of macro
 - Add missing conversions from is_xen_heap_page()

v3:
 - Delete obsolete comment.

v2:
 - New in v2
---
 xen/arch/x86/domctl.c           |  2 +-
 xen/arch/x86/mm.c               |  9 ++++-----
 xen/arch/x86/mm/altp2m.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c   |  3 +--
 xen/arch/x86/mm/p2m-pod.c       | 10 ++++++----
 xen/arch/x86/mm/shadow/common.c | 13 ++++++++-----
 xen/arch/x86/mm/shadow/multi.c  |  2 +-
 xen/arch/x86/tboot.c            |  4 ++--
 xen/include/xen/mm.h            |  5 +++++
 9 files changed, 29 insertions(+), 21 deletions(-)

Comments

Jan Beulich March 17, 2020, 1:06 p.m. UTC | #1
On 10.03.2020 18:49, paul@xen.org wrote:
> In auditing open-coded tests of PGC_xen_heap, I am unsure if offline_page()
> needs to check for PGC_extra pages too.

"Extra" pages being the designated replacement for Xen heap ones,
I think it should. Then again the earlier

    if ( (owner = page_get_owner_and_reference(pg)) )

should succeed on them (as much as it should for Xen heap pages
shared with a domain), so perhaps simply say something to this
effect in the description?

> @@ -4216,8 +4216,7 @@ int steal_page(
>      if ( !(owner = page_get_owner_and_reference(page)) )
>          goto fail;
>  
> -    if ( owner != d || is_xen_heap_page(page) ||
> -         (page->count_info & PGC_extra) )
> +    if ( owner != d || is_special_page(page) )
>          goto fail_put;
>  
>      /*

A few hundred lines down from here in xenmem_add_to_physmap_one()
there is a use of is_xen_heap_mfn(). Any reason that doesn't get
converted? Same question - because of the code being similar -
then goes for mm/p2m.c:p2m_add_foreign().

> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
>  
>          n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
>          for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
> -            if ( !(page->count_info & PGC_allocated) ||
> -                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
> +            if ( is_special_page(page) ||
> +                 !(page->count_info & PGC_allocated) ||
> +                 (page->count_info & PGC_page_table) ||
>                   (page->count_info & PGC_count_mask) > max_ref )
>                  goto out;
>      }
> @@ -886,8 +887,9 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
>           * If this is ram, and not a pagetable or from the xen heap, and
>           * probably not mapped elsewhere, map it; otherwise, skip.
>           */
> -        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
> -             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
> +        if ( p2m_is_ram(types[i]) && !is_special_page(pg) &&
> +             (pg->count_info & PGC_allocated) &&
> +             !(pg->count_info & PGC_page_table) &&
>               ((pg->count_info & PGC_count_mask) <= max_ref) )
>              map[i] = map_domain_page(mfns[i]);
>          else

I appreciate your desire to use the inline function you add, and
I also appreciate that you likely prefer to not make the other
suggested change (at least not right here), but then I think the
commit message would better gain a remark regarding the
suspicious uses of PGC_page_table here. I continue to think that
they should be dropped as being pointless. In any event I fear
the resulting code will be less efficient, as I'm unconvinced
that the compiler will fold the now split bit checks.

> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -559,7 +559,7 @@ _sh_propagate(struct vcpu *v,
>       * caching attributes in the shadows to match what was asked for.
>       */
>      if ( (level == 1) && is_hvm_domain(d) &&
> -         !is_xen_heap_mfn(target_mfn) )
> +         !is_special_page(mfn_to_page(target_mfn)) )

Careful - is_xen_heap_mfn() also includes an mfn_valid() check.
Code a few lines up from here suggests that MMIO MFNs can make
it here.

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -285,6 +285,11 @@ extern struct domain *dom_cow;
>  
>  #include <asm/mm.h>
>  
> +static inline bool is_special_page(const struct page_info *page)
> +{
> +    return is_xen_heap_page(page) || (page->count_info & PGC_extra);

Seeing Arm32's implementation I understand why you need to use
|| here; it's a pity the two checks can't be folded. Hopefully
at least here the compiler recognizes the opportunity.

Jan
Paul Durrant March 17, 2020, 2:47 p.m. UTC | #2
> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 17 March 2020 13:07
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; Durrant, Paul <pdurrant@amazon.co.uk>; Tamas K Lengyel
> <tamas@tklengyel.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau
> Monné <roger.pau@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Tim Deegan <tim@xen.org>
> Subject: RE: [EXTERNAL] [PATCH v6 4/5] mm: add 'is_special_page' inline function...
> 
> On 10.03.2020 18:49, paul@xen.org wrote:
> > In auditing open-coded tests of PGC_xen_heap, I am unsure if offline_page()
> > needs to check for PGC_extra pages too.
> 
> "Extra" pages being the designated replacement for Xen heap ones,
> I think it should. Then again the earlier
> 
>     if ( (owner = page_get_owner_and_reference(pg)) )
> 
> should succeed on them (as much as it should for Xen heap pages
> shared with a domain), so perhaps simply say something to this
> effect in the description?
> 
> > @@ -4216,8 +4216,7 @@ int steal_page(
> >      if ( !(owner = page_get_owner_and_reference(page)) )
> >          goto fail;
> >
> > -    if ( owner != d || is_xen_heap_page(page) ||
> > -         (page->count_info & PGC_extra) )
> > +    if ( owner != d || is_special_page(page) )
> >          goto fail_put;
> >
> >      /*
> 
> A few hundred lines down from here in xenmem_add_to_physmap_one()
> there is a use of is_xen_heap_mfn(). Any reason that doesn't get
> converted? Same question - because of the code being similar -
> then goes for mm/p2m.c:p2m_add_foreign().
> 

I'll check again.

> > --- a/xen/arch/x86/mm/p2m-pod.c
> > +++ b/xen/arch/x86/mm/p2m-pod.c
> > @@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
> >
> >          n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
> >          for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
> > -            if ( !(page->count_info & PGC_allocated) ||
> > -                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
> > +            if ( is_special_page(page) ||
> > +                 !(page->count_info & PGC_allocated) ||
> > +                 (page->count_info & PGC_page_table) ||
> >                   (page->count_info & PGC_count_mask) > max_ref )
> >                  goto out;
> >      }
> > @@ -886,8 +887,9 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
> >           * If this is ram, and not a pagetable or from the xen heap, and
> >           * probably not mapped elsewhere, map it; otherwise, skip.
> >           */
> > -        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
> > -             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
> > +        if ( p2m_is_ram(types[i]) && !is_special_page(pg) &&
> > +             (pg->count_info & PGC_allocated) &&
> > +             !(pg->count_info & PGC_page_table) &&
> >               ((pg->count_info & PGC_count_mask) <= max_ref) )
> >              map[i] = map_domain_page(mfns[i]);
> >          else
> 
> I appreciate your desire to use the inline function you add, and
> I also appreciate that you likely prefer to not make the other
> suggested change (at least not right here), but then I think the
> commit message would better gain a remark regarding the
> suspicious uses of PGC_page_table here.

What's suspicious about it? I now realise the above comment also needs fixing.

> I continue to think that
> they should be dropped as being pointless. In any event I fear
> the resulting code will be less efficient, as I'm unconvinced
> that the compiler will fold the now split bit checks.
> 

I could go back to defining is_special_page() as a macro.

> > --- a/xen/arch/x86/mm/shadow/multi.c
> > +++ b/xen/arch/x86/mm/shadow/multi.c
> > @@ -559,7 +559,7 @@ _sh_propagate(struct vcpu *v,
> >       * caching attributes in the shadows to match what was asked for.
> >       */
> >      if ( (level == 1) && is_hvm_domain(d) &&
> > -         !is_xen_heap_mfn(target_mfn) )
> > +         !is_special_page(mfn_to_page(target_mfn)) )
> 
> Careful - is_xen_heap_mfn() also includes an mfn_valid() check.
> Code a few lines up from here suggests that MMIO MFNs can make
> it here.
> 

Ok.

> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -285,6 +285,11 @@ extern struct domain *dom_cow;
> >
> >  #include <asm/mm.h>
> >
> > +static inline bool is_special_page(const struct page_info *page)
> > +{
> > +    return is_xen_heap_page(page) || (page->count_info & PGC_extra);
> 
> Seeing Arm32's implementation I understand why you need to use
> || here; it's a pity the two checks can't be folded. Hopefully
> at least here the compiler recognizes the opportunity.
> 

Yes.

  Paul

> Jan
Jan Beulich March 17, 2020, 3:01 p.m. UTC | #3
On 17.03.2020 15:47, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 17 March 2020 13:07
>>
>> On 10.03.2020 18:49, paul@xen.org wrote:
>>> --- a/xen/arch/x86/mm/p2m-pod.c
>>> +++ b/xen/arch/x86/mm/p2m-pod.c
>>> @@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
>>>
>>>          n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
>>>          for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
>>> -            if ( !(page->count_info & PGC_allocated) ||
>>> -                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
>>> +            if ( is_special_page(page) ||
>>> +                 !(page->count_info & PGC_allocated) ||
>>> +                 (page->count_info & PGC_page_table) ||
>>>                   (page->count_info & PGC_count_mask) > max_ref )
>>>                  goto out;
>>>      }
>>> @@ -886,8 +887,9 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
>>>           * If this is ram, and not a pagetable or from the xen heap, and
>>>           * probably not mapped elsewhere, map it; otherwise, skip.
>>>           */
>>> -        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
>>> -             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
>>> +        if ( p2m_is_ram(types[i]) && !is_special_page(pg) &&
>>> +             (pg->count_info & PGC_allocated) &&
>>> +             !(pg->count_info & PGC_page_table) &&
>>>               ((pg->count_info & PGC_count_mask) <= max_ref) )
>>>              map[i] = map_domain_page(mfns[i]);
>>>          else
>>
>> I appreciate your desire to use the inline function you add, and
>> I also appreciate that you likely prefer to not make the other
>> suggested change (at least not right here), but then I think the
>> commit message would better gain a remark regarding the
>> suspicious uses of PGC_page_table here.
> 
> What's suspicious about it?

Hmm, looks like I was wrongly remembering shadow code to be setting
this on the shadows of guest page tables, rather than on the guest
page tables themselves. I'm sorry for the noise. (Keeping the bit-
is-set tests together may still be a good idea though code generation
wise, unless you know that all halfway recent compiler versions can
fold the code fine in its current shape.)

> I now realise the above comment also needs fixing.

Oh, indeed.

>> I continue to think that
>> they should be dropped as being pointless. In any event I fear
>> the resulting code will be less efficient, as I'm unconvinced
>> that the compiler will fold the now split bit checks.
> 
> I could go back to defining is_special_page() as a macro.

I don't think this would make a difference.

Jan
Julien Grall March 17, 2020, 3:23 p.m. UTC | #4
On 17/03/2020 13:06, Jan Beulich wrote:
> On 10.03.2020 18:49, paul@xen.org wrote:
>> In auditing open-coded tests of PGC_xen_heap, I am unsure if offline_page()
>> needs to check for PGC_extra pages too.
> 
> "Extra" pages being the designated replacement for Xen heap ones,
> I think it should. Then again the earlier
> 
>      if ( (owner = page_get_owner_and_reference(pg)) )
> 
> should succeed on them (as much as it should for Xen heap pages
> shared with a domain), so perhaps simply say something to this
> effect in the description?
> 
>> @@ -4216,8 +4216,7 @@ int steal_page(
>>       if ( !(owner = page_get_owner_and_reference(page)) )
>>           goto fail;
>>   
>> -    if ( owner != d || is_xen_heap_page(page) ||
>> -         (page->count_info & PGC_extra) )
>> +    if ( owner != d || is_special_page(page) )
>>           goto fail_put;
>>   
>>       /*
> 
> A few hundred lines down from here in xenmem_add_to_physmap_one()
> there is a use of is_xen_heap_mfn(). Any reason that doesn't get
> converted? Same question - because of the code being similar -
> then goes for mm/p2m.c:p2m_add_foreign().
> 
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
>>   
>>           n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
>>           for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
>> -            if ( !(page->count_info & PGC_allocated) ||
>> -                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
>> +            if ( is_special_page(page) ||
>> +                 !(page->count_info & PGC_allocated) ||
>> +                 (page->count_info & PGC_page_table) ||
>>                    (page->count_info & PGC_count_mask) > max_ref )
>>                   goto out;
>>       }
>> @@ -886,8 +887,9 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
>>            * If this is ram, and not a pagetable or from the xen heap, and
>>            * probably not mapped elsewhere, map it; otherwise, skip.
>>            */
>> -        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
>> -             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
>> +        if ( p2m_is_ram(types[i]) && !is_special_page(pg) &&
>> +             (pg->count_info & PGC_allocated) &&
>> +             !(pg->count_info & PGC_page_table) &&
>>                ((pg->count_info & PGC_count_mask) <= max_ref) )
>>               map[i] = map_domain_page(mfns[i]);
>>           else
> 
> I appreciate your desire to use the inline function you add, and
> I also appreciate that you likely prefer to not make the other
> suggested change (at least not right here), but then I think the
> commit message would better gain a remark regarding the
> suspicious uses of PGC_page_table here. I continue to think that
> they should be dropped as being pointless. In any event I fear
> the resulting code will be less efficient, as I'm unconvinced
> that the compiler will fold the now split bit checks.
> 
>> --- a/xen/arch/x86/mm/shadow/multi.c
>> +++ b/xen/arch/x86/mm/shadow/multi.c
>> @@ -559,7 +559,7 @@ _sh_propagate(struct vcpu *v,
>>        * caching attributes in the shadows to match what was asked for.
>>        */
>>       if ( (level == 1) && is_hvm_domain(d) &&
>> -         !is_xen_heap_mfn(target_mfn) )
>> +         !is_special_page(mfn_to_page(target_mfn)) )
> 
> Careful - is_xen_heap_mfn() also includes an mfn_valid() check.
> Code a few lines up from here suggests that MMIO MFNs can make
> it here.
> 
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -285,6 +285,11 @@ extern struct domain *dom_cow;
>>   
>>   #include <asm/mm.h>
>>   
>> +static inline bool is_special_page(const struct page_info *page)
>> +{
>> +    return is_xen_heap_page(page) || (page->count_info & PGC_extra);
> 
> Seeing Arm32's implementation I understand why you need to use
> || here; it's a pity the two checks can't be folded. Hopefully
> at least here the compiler recognizes the opportunity.

Note this is not a request for this series :).

I noticed this oddity recently. I have been pondering whether we should 
aim to have PGC_xen_heap even when using split (like on Arm32).

This would avoid problem like the offline code, which is common code, to 
be broken on platform using split xenheap.

Cheers,

Patch

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ed86762fa6..add70126b9 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -394,7 +394,7 @@  long arch_do_domctl(
             page = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
 
             if ( unlikely(!page) ||
-                 unlikely(is_xen_heap_page(page)) )
+                 unlikely(is_special_page(page)) )
             {
                 if ( unlikely(p2m_is_broken(t)) )
                     type = XEN_DOMCTL_PFINFO_BROKEN;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ba7563ed3c..353bde5c2c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1014,7 +1014,7 @@  get_page_from_l1e(
         unsigned long cacheattr = pte_flags_to_cacheattr(l1f);
         int err;
 
-        if ( is_xen_heap_page(page) )
+        if ( is_special_page(page) )
         {
             if ( write )
                 put_page_type(page);
@@ -2447,7 +2447,7 @@  static int cleanup_page_mappings(struct page_info *page)
     {
         page->count_info &= ~PGC_cacheattr_mask;
 
-        BUG_ON(is_xen_heap_page(page));
+        BUG_ON(is_special_page(page));
 
         rc = update_xen_mappings(mfn, 0);
     }
@@ -2477,7 +2477,7 @@  static int cleanup_page_mappings(struct page_info *page)
                 rc = rc2;
         }
 
-        if ( likely(!is_xen_heap_page(page)) )
+        if ( likely(!is_special_page(page)) )
         {
             ASSERT((page->u.inuse.type_info &
                     (PGT_type_mask | PGT_count_mask)) == PGT_writable_page);
@@ -4216,8 +4216,7 @@  int steal_page(
     if ( !(owner = page_get_owner_and_reference(page)) )
         goto fail;
 
-    if ( owner != d || is_xen_heap_page(page) ||
-         (page->count_info & PGC_extra) )
+    if ( owner != d || is_special_page(page) )
         goto fail_put;
 
     /*
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 50768f2547..c091b03ea3 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -77,7 +77,7 @@  int altp2m_vcpu_enable_ve(struct vcpu *v, gfn_t gfn)
      * pageable() predicate for this, due to it having the same properties
      * that we want.
      */
-    if ( !p2m_is_pageable(p2mt) || is_xen_heap_page(pg) )
+    if ( !p2m_is_pageable(p2mt) || is_special_page(pg) )
     {
         rc = -EINVAL;
         goto err;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 3835bc928f..f49f27a3ef 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -840,9 +840,8 @@  static int nominate_page(struct domain *d, gfn_t gfn,
     if ( !p2m_is_sharable(p2mt) )
         goto out;
 
-    /* Skip xen heap pages */
     page = mfn_to_page(mfn);
-    if ( !page || is_xen_heap_page(page) )
+    if ( !page || is_special_page(page) )
         goto out;
 
     /* Check if there are mem_access/remapped altp2m entries for this page */
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 2a7b8c117b..834375ac33 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -749,8 +749,9 @@  p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
 
         n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
         for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
-            if ( !(page->count_info & PGC_allocated) ||
-                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
+            if ( is_special_page(page) ||
+                 !(page->count_info & PGC_allocated) ||
+                 (page->count_info & PGC_page_table) ||
                  (page->count_info & PGC_count_mask) > max_ref )
                 goto out;
     }
@@ -886,8 +887,9 @@  p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
          * If this is ram, and not a pagetable or from the xen heap, and
          * probably not mapped elsewhere, map it; otherwise, skip.
          */
-        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
-             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
+        if ( p2m_is_ram(types[i]) && !is_special_page(pg) &&
+             (pg->count_info & PGC_allocated) &&
+             !(pg->count_info & PGC_page_table) &&
              ((pg->count_info & PGC_count_mask) <= max_ref) )
             map[i] = map_domain_page(mfns[i]);
         else
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index cba3ab1eba..5431142704 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2087,19 +2087,22 @@  static int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn)
          * The qemu helper process has an untyped mapping of this dom's RAM
          * and the HVM restore program takes another.
          * Also allow one typed refcount for
-         * - Xen heap pages, to match share_xen_page_with_guest(),
-         * - ioreq server pages, to match prepare_ring_for_helper().
+         * - special pages, which are explicitly referenced and mapped by
+         *   Xen.
+         * - ioreq server pages, which may be special pages or normal
+         *   guest pages with an extra reference taken by
+         *   prepare_ring_for_helper().
          */
         if ( !(shadow_mode_external(d)
                && (page->count_info & PGC_count_mask) <= 3
                && ((page->u.inuse.type_info & PGT_count_mask)
-                   == (is_xen_heap_page(page) ||
+                   == (is_special_page(page) ||
                        (is_hvm_domain(d) && is_ioreq_server_page(d, page))))) )
             printk(XENLOG_G_ERR "can't find all mappings of mfn %"PRI_mfn
-                   " (gfn %"PRI_gfn"): c=%lx t=%lx x=%d i=%d\n",
+                   " (gfn %"PRI_gfn"): c=%lx t=%lx s=%d i=%d\n",
                    mfn_x(gmfn), gfn_x(gfn),
                    page->count_info, page->u.inuse.type_info,
-                   !!is_xen_heap_page(page),
+                   is_special_page(page),
                    (is_hvm_domain(d) && is_ioreq_server_page(d, page)));
     }
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 26798b317c..ac19d203d7 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -559,7 +559,7 @@  _sh_propagate(struct vcpu *v,
      * caching attributes in the shadows to match what was asked for.
      */
     if ( (level == 1) && is_hvm_domain(d) &&
-         !is_xen_heap_mfn(target_mfn) )
+         !is_special_page(mfn_to_page(target_mfn)) )
     {
         int type;
 
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 8c232270b4..3224d1684b 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -189,7 +189,7 @@  static void update_pagetable_mac(vmac_ctx_t *ctx)
 
         if ( !mfn_valid(_mfn(mfn)) )
             continue;
-        if ( is_page_in_use(page) && !is_xen_heap_page(page) )
+        if ( is_page_in_use(page) && !is_special_page(page) )
         {
             if ( page->count_info & PGC_page_table )
             {
@@ -289,7 +289,7 @@  static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE],
                               + 3 * PAGE_SIZE)) )
             continue; /* skip tboot and its page tables */
 
-        if ( is_page_in_use(page) && is_xen_heap_page(page) )
+        if ( is_page_in_use(page) && is_special_page(page) )
         {
             void *pg;
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 0769e376d2..bbe7edeb34 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -285,6 +285,11 @@  extern struct domain *dom_cow;
 
 #include <asm/mm.h>
 
+static inline bool is_special_page(const struct page_info *page)
+{
+    return is_xen_heap_page(page) || (page->count_info & PGC_extra);
+}
+
 #ifndef page_list_entry
 struct page_list_head
 {