
[v6,02/13] mm: remove extra ZONE_DEVICE struct page refcount

Message ID 20210813063150.2938-3-alex.sierra@amd.com (mailing list archive)
State New
Series: Support DEVICE_GENERIC memory in migrate_vma_*

Commit Message

Sierra Guiza, Alejandro (Alex) Aug. 13, 2021, 6:31 a.m. UTC
From: Ralph Campbell <rcampbell@nvidia.com>

ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page is not being used (gup, compaction,
migration, etc.). Clean up the code so the reference count doesn't need to
be treated specially for ZONE_DEVICE.

v2:
AS: merged this patch into the Linux 5.11 version

v5:
AS: add a condition in try_grab_page to check for the zone device type when
the page ref count is compared against zero. For device zone pages, the ref
count is initialized to zero.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
 arch/powerpc/kvm/book3s_hv_uvmem.c     |  2 +-
 drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 +-
 fs/dax.c                               |  4 +-
 include/linux/dax.h                    |  2 +-
 include/linux/memremap.h               |  7 +--
 include/linux/mm.h                     | 13 +----
 lib/test_hmm.c                         |  2 +-
 mm/internal.h                          |  8 +++
 mm/memremap.c                          | 68 +++++++-------------------
 mm/migrate.c                           |  5 --
 mm/page_alloc.c                        |  3 ++
 mm/swap.c                              | 45 ++---------------
 12 files changed, 46 insertions(+), 115 deletions(-)

Comments

Christoph Hellwig Aug. 15, 2021, 3:37 p.m. UTC | #1
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 8ae31622deef..d48a1f0889d1 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1218,7 +1218,7 @@ __maybe_unused struct page *try_grab_compound_head(struct page *page, int refs,
>  static inline __must_check bool try_get_page(struct page *page)
>  {
>  	page = compound_head(page);
> -	if (WARN_ON_ONCE(page_ref_count(page) <= 0))
> +	if (WARN_ON_ONCE(page_ref_count(page) < (int)!is_zone_device_page(page)))

Please avoid the overly long line.  In fact I'd be tempted to just not
bother here and keep the old, more loose check.  Especially given that
John has a patch ready that removes try_get_page entirely.
John Hubbard Aug. 15, 2021, 8:40 p.m. UTC | #2
On 8/15/21 8:37 AM, Christoph Hellwig wrote:
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 8ae31622deef..d48a1f0889d1 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1218,7 +1218,7 @@ __maybe_unused struct page *try_grab_compound_head(struct page *page, int refs,
>>   static inline __must_check bool try_get_page(struct page *page)
>>   {
>>   	page = compound_head(page);
>> -	if (WARN_ON_ONCE(page_ref_count(page) <= 0))
>> +	if (WARN_ON_ONCE(page_ref_count(page) < (int)!is_zone_device_page(page)))
> 
> Please avoid the overly long line.  In fact I'd be tempted to just not
> bother here and keep the old, more loose check.  Especially given that
> John has a patch ready that removes try_get_page entirely.
> 

Yes. Andrew has accepted it into mmotm.

Ralph's patch here was written well before my cleanup that removed
try_get_page() [1]. But now that we're here, if you drop this hunk then
it will make merging easier, I think.


[1] https://lore.kernel.org/r/20210813044133.1536842-4-jhubbard@nvidia.com

thanks,
--
John Hubbard
NVIDIA
Felix Kuehling Aug. 16, 2021, 6:56 p.m. UTC | #3
On 2021-08-15 at 4:40 p.m., John Hubbard wrote:
> On 8/15/21 8:37 AM, Christoph Hellwig wrote:
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index 8ae31622deef..d48a1f0889d1 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -1218,7 +1218,7 @@ __maybe_unused struct page
>>> *try_grab_compound_head(struct page *page, int refs,
>>>   static inline __must_check bool try_get_page(struct page *page)
>>>   {
>>>       page = compound_head(page);
>>> -    if (WARN_ON_ONCE(page_ref_count(page) <= 0))
>>> +    if (WARN_ON_ONCE(page_ref_count(page) <
>>> (int)!is_zone_device_page(page)))
>>
>> Please avoid the overly long line.  In fact I'd be tempted to just not
>> bother here and keep the old, more loose check.  Especially given that
>> John has a patch ready that removes try_get_page entirely.
>>
>
> Yes. Andrew has accepted it into mmotm.
>
> Ralph's patch here was written well before my cleanup that removed
> try_get_page() [1]. But now that we're here, if you drop this hunk then
> it will make merging easier, I think.
>
>
> [1]
> https://lore.kernel.org/r/20210813044133.1536842-4-jhubbard@nvidia.com

Hi John,

Thanks for the pointer. We'll drop this hunk and add a statement to our
patch description to highlight the dependency on your patch.

Regards,
  Felix


>
> thanks,
> -- 
> John Hubbard
> NVIDIA
>
Ralph Campbell Aug. 18, 2021, 12:01 a.m. UTC | #4
On 8/12/21 11:31 PM, Alex Sierra wrote:
> From: Ralph Campbell <rcampbell@nvidia.com>
>
> ZONE_DEVICE struct pages have an extra reference count that complicates the
> code for put_page() and several places in the kernel that need to check the
> reference count to see that a page is not being used (gup, compaction,
> migration, etc.). Clean up the code so the reference count doesn't need to
> be treated specially for ZONE_DEVICE.
>
> v2:
> AS: merged this patch in linux 5.11 version
>
> v5:
> AS: add condition at try_grab_page to check for the zone device type, while
> page ref counter is checked less/equal to zero. In case of device zone, pages
> ref counter are initialized to zero.
>
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> Signed-off-by: Alex Sierra <alex.sierra@amd.com>
> ---
>   arch/powerpc/kvm/book3s_hv_uvmem.c     |  2 +-
>   drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 +-
>   fs/dax.c                               |  4 +-
>   include/linux/dax.h                    |  2 +-
>   include/linux/memremap.h               |  7 +--
>   include/linux/mm.h                     | 13 +----
>   lib/test_hmm.c                         |  2 +-
>   mm/internal.h                          |  8 +++
>   mm/memremap.c                          | 68 +++++++-------------------
>   mm/migrate.c                           |  5 --
>   mm/page_alloc.c                        |  3 ++
>   mm/swap.c                              | 45 ++---------------
>   12 files changed, 46 insertions(+), 115 deletions(-)
>
I haven't seen a response to the issues I raised back at v3 of this series.
https://lore.kernel.org/linux-mm/4f6dd918-d79b-1aa7-3a4c-caa67ddc29bc@nvidia.com/

Did I miss something?
Felix Kuehling Aug. 18, 2021, 12:35 a.m. UTC | #5
On 2021-08-17 at 8:01 p.m., Ralph Campbell wrote:
> On 8/12/21 11:31 PM, Alex Sierra wrote:
>> From: Ralph Campbell <rcampbell@nvidia.com>
>>
>> ZONE_DEVICE struct pages have an extra reference count that
>> complicates the
>> code for put_page() and several places in the kernel that need to
>> check the
>> reference count to see that a page is not being used (gup, compaction,
>> migration, etc.). Clean up the code so the reference count doesn't
>> need to
>> be treated specially for ZONE_DEVICE.
>>
>> v2:
>> AS: merged this patch in linux 5.11 version
>>
>> v5:
>> AS: add condition at try_grab_page to check for the zone device type,
>> while
>> page ref counter is checked less/equal to zero. In case of device
>> zone, pages
>> ref counter are initialized to zero.
>>
>> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
>> Signed-off-by: Alex Sierra <alex.sierra@amd.com>
>> ---
>>   arch/powerpc/kvm/book3s_hv_uvmem.c     |  2 +-
>>   drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 +-
>>   fs/dax.c                               |  4 +-
>>   include/linux/dax.h                    |  2 +-
>>   include/linux/memremap.h               |  7 +--
>>   include/linux/mm.h                     | 13 +----
>>   lib/test_hmm.c                         |  2 +-
>>   mm/internal.h                          |  8 +++
>>   mm/memremap.c                          | 68 +++++++-------------------
>>   mm/migrate.c                           |  5 --
>>   mm/page_alloc.c                        |  3 ++
>>   mm/swap.c                              | 45 ++---------------
>>   12 files changed, 46 insertions(+), 115 deletions(-)
>>
> I haven't seen a response to the issues I raised back at v3 of this
> series.
> https://lore.kernel.org/linux-mm/4f6dd918-d79b-1aa7-3a4c-caa67ddc29bc@nvidia.com/
>
>
> Did I miss something?

I think part of the response was that we did more testing. Alex added
support for DEVICE_GENERIC pages to test_hmm and he ran DAX tests
recommended by Theodore Tso. In that testing he ran into a WARN_ON_ONCE
about a zero page refcount in try_get_page. The fix is in the latest
version of patch 2. But it's already obsolete because John Hubbard is
about to remove that function altogether.

I think the issues you raised were more uncertainty than known bugs. It
seems the fact that you can have DAX pages with 0 refcount is a feature
more than a bug.

Regards,
  Felix
Ralph Campbell Aug. 18, 2021, 7:28 p.m. UTC | #6
On 8/17/21 5:35 PM, Felix Kuehling wrote:
> On 2021-08-17 at 8:01 p.m., Ralph Campbell wrote:
>> On 8/12/21 11:31 PM, Alex Sierra wrote:
>>> From: Ralph Campbell <rcampbell@nvidia.com>
>>>
>>> ZONE_DEVICE struct pages have an extra reference count that
>>> complicates the
>>> code for put_page() and several places in the kernel that need to
>>> check the
>>> reference count to see that a page is not being used (gup, compaction,
>>> migration, etc.). Clean up the code so the reference count doesn't
>>> need to
>>> be treated specially for ZONE_DEVICE.
>>>
>>> v2:
>>> AS: merged this patch in linux 5.11 version
>>>
>>> v5:
>>> AS: add condition at try_grab_page to check for the zone device type,
>>> while
>>> page ref counter is checked less/equal to zero. In case of device
>>> zone, pages
>>> ref counter are initialized to zero.
>>>
>>> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
>>> Signed-off-by: Alex Sierra <alex.sierra@amd.com>
>>> ---
>>>    arch/powerpc/kvm/book3s_hv_uvmem.c     |  2 +-
>>>    drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 +-
>>>    fs/dax.c                               |  4 +-
>>>    include/linux/dax.h                    |  2 +-
>>>    include/linux/memremap.h               |  7 +--
>>>    include/linux/mm.h                     | 13 +----
>>>    lib/test_hmm.c                         |  2 +-
>>>    mm/internal.h                          |  8 +++
>>>    mm/memremap.c                          | 68 +++++++-------------------
>>>    mm/migrate.c                           |  5 --
>>>    mm/page_alloc.c                        |  3 ++
>>>    mm/swap.c                              | 45 ++---------------
>>>    12 files changed, 46 insertions(+), 115 deletions(-)
>>>
>> I haven't seen a response to the issues I raised back at v3 of this
>> series.
>> https://lore.kernel.org/linux-mm/4f6dd918-d79b-1aa7-3a4c-caa67ddc29bc@nvidia.com/
>>
>>
>> Did I miss something?
> I think part of the response was that we did more testing. Alex added
> support for DEVICE_GENERIC pages to test_hmm and he ran DAX tests
> recommended by Theodore Tso. In that testing he ran into a WARN_ON_ONCE
> about a zero page refcount in try_get_page. The fix is in the latest
> version of patch 2. But it's already obsolete because John Hubbard is
> about to remove that function altogether.
>
> I think the issues you raised were more uncertainty than known bugs. It
> seems the fact that you can have DAX pages with 0 refcount is a feature
> more than a bug.
>
> Regards,
>    Felix

Did you test on a system without CONFIG_ARCH_HAS_PTE_SPECIAL defined?
In that case, mmap() of a DAX device will call insert_page() which calls
get_page() which would trigger VM_BUG_ON_PAGE().
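
For reference, the check that trips in that path looks roughly like this
(paraphrasing the get_page() of this era in include/linux/mm.h, not quoting
it exactly):

static inline void get_page(struct page *page)
{
	page = compound_head(page);
	/*
	 * Getting a normal page or the head of a compound page
	 * requires an already elevated page->_refcount, so a
	 * ZONE_DEVICE page that now starts at refcount 0 trips
	 * the assertion below when CONFIG_DEBUG_VM is enabled.
	 */
	VM_BUG_ON_PAGE(page_ref_count(page) <= 0, page);
	page_ref_inc(page);
}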

I can believe it is OK for PTE_SPECIAL page table entries to have no
struct page, or for MEMORY_DEVICE_GENERIC struct pages to be mapped with
a zero reference count using insert_pfn().

I find it hard to believe that other MM developers don't see an issue
with a struct page with refcount == 0 and mapcount == 1.

I don't see where init_page_count() is being called for the
MEMORY_DEVICE_GENERIC or MEMORY_DEVICE_PRIVATE struct pages the AMD
driver allocates and passes to migrate_vma_setup().
Looks like svm_migrate_get_vram_page() needs to call init_page_count()
instead of get_page(). (I'm looking at branch origin/alexsierrag/device_generic
https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver.git)

Also, what about the other places where is_device_private_page() is called?
Don't they need to be updated to call is_device_page() instead?
One of my goals for this patch was to remove the special-casing of reference
counts for ZONE_DEVICE pages in rmap.c, etc.

I still think this patch needs an ACK from a FS/DAX maintainer.
Sierra Guiza, Alejandro (Alex) Aug. 19, 2021, 6 p.m. UTC | #7
On 8/18/2021 2:28 PM, Ralph Campbell wrote:
> On 8/17/21 5:35 PM, Felix Kuehling wrote:
>> On 2021-08-17 at 8:01 p.m., Ralph Campbell wrote:
>>> On 8/12/21 11:31 PM, Alex Sierra wrote:
>>>> From: Ralph Campbell <rcampbell@nvidia.com>
>>>>
>>>> ZONE_DEVICE struct pages have an extra reference count that
>>>> complicates the
>>>> code for put_page() and several places in the kernel that need to
>>>> check the
>>>> reference count to see that a page is not being used (gup, compaction,
>>>> migration, etc.). Clean up the code so the reference count doesn't
>>>> need to
>>>> be treated specially for ZONE_DEVICE.
>>>>
>>>> v2:
>>>> AS: merged this patch in linux 5.11 version
>>>>
>>>> v5:
>>>> AS: add condition at try_grab_page to check for the zone device type,
>>>> while
>>>> page ref counter is checked less/equal to zero. In case of device
>>>> zone, pages
>>>> ref counter are initialized to zero.
>>>>
>>>> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
>>>> Signed-off-by: Alex Sierra <alex.sierra@amd.com>
>>>> ---
>>>>    arch/powerpc/kvm/book3s_hv_uvmem.c     |  2 +-
>>>>    drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 +-
>>>>    fs/dax.c                               |  4 +-
>>>>    include/linux/dax.h                    |  2 +-
>>>>    include/linux/memremap.h               |  7 +--
>>>>    include/linux/mm.h                     | 13 +----
>>>>    lib/test_hmm.c                         |  2 +-
>>>>    mm/internal.h                          |  8 +++
>>>>    mm/memremap.c                          | 68 +++++++-------------------
>>>>    mm/migrate.c                           |  5 --
>>>>    mm/page_alloc.c                        |  3 ++
>>>>    mm/swap.c                              | 45 ++---------------
>>>>    12 files changed, 46 insertions(+), 115 deletions(-)
>>>>
>>> I haven't seen a response to the issues I raised back at v3 of this
>>> series.
>>> https://lore.kernel.org/linux-mm/4f6dd918-d79b-1aa7-3a4c-caa67ddc29bc@nvidia.com/
>>>
>>> Did I miss something?
>> I think part of the response was that we did more testing. Alex added
>> support for DEVICE_GENERIC pages to test_hmm and he ran DAX tests
>> recommended by Theodore Tso. In that testing he ran into a WARN_ON_ONCE
>> about a zero page refcount in try_get_page. The fix is in the latest
>> version of patch 2. But it's already obsolete because John Hubbard is
>> about to remove that function altogether.
>>
>> I think the issues you raised were more uncertainty than known bugs. It
>> seems the fact that you can have DAX pages with 0 refcount is a feature
>> more than a bug.
>>
>> Regards,
>>    Felix
>
> Did you test on a system without CONFIG_ARCH_HAS_PTE_SPECIAL defined?
> In that case, mmap() of a DAX device will call insert_page() which calls
> get_page() which would trigger VM_BUG_ON_PAGE().
>
> I can believe it is OK for PTE_SPECIAL page table entries to have no
> struct page or that MEMORY_DEVICE_GENERIC struct pages be mapped with
> a zero reference count using insert_pfn().
Hi Ralph,
We have tried the DAX tests with and without CONFIG_ARCH_HAS_PTE_SPECIAL
defined. Apparently none of the tests touches that condition for a DAX
device. Of course, that doesn't mean it couldn't happen.

Regards,
Alex S.

>
>
> I find it hard to believe that other MM developers don't see an issue
> with a struct page with refcount == 0 and mapcount == 1.
>
> I don't see where init_page_count() is being called for the
> MEMORY_DEVICE_GENERIC or MEMORY_DEVICE_PRIVATE struct pages the AMD
> driver allocates and passes to migrate_vma_setup().
> Looks like svm_migrate_get_vram_page() needs to call init_page_count()
> instead of get_page(). (I'm looking at branch 
> origin/alexsierrag/device_generic
> https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver.git)
Yes, you're right. My bad. Thanks for catching this. I didn't realize I had
not defined CONFIG_DEBUG_VM in my build, so this BUG was never caught.
It worked after I replaced get_page with init_page_count in
svm_migrate_get_vram_page. However, I don't think this is the best way
to fix it. Ideally, the get_page call should work for device pages with a
ref count of 0 too. Otherwise, we could overwrite the refcount if someone
else grabs the page concurrently.
I was thinking of adding a special condition in get_page for device pages.
This could also fix the insert_page -> get_page call from a DAX device, as
sketched below.
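
A minimal sketch of the special condition I have in mind, mirroring the
try_get_page hunk in this patch (hypothetical and untested, not part of
this series):

static inline void get_page(struct page *page)
{
	page = compound_head(page);
	/*
	 * Hypothetical: tolerate refcount == 0 for ZONE_DEVICE pages,
	 * which start with a zero refcount after this series, while
	 * still catching a zero refcount on normal pages.
	 */
	VM_BUG_ON_PAGE(page_ref_count(page) <
		       (int)!is_zone_device_page(page), page);
	page_ref_inc(page);
}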

Regards,
Alex S.
>
>
> Also, what about the other places where is_device_private_page() is 
> called?
> Don't they need to be updated to call is_device_page() instead?
> One of my goals for this patch was to remove special casing reference 
> counts
> for ZONE_DEVICE pages in rmap.c, etc.
Correct, is_device_private_page is still used in rmap.c, memcontrol.c and
migrate.c. It looks like the calls in rmap and memcontrol should be replaced
with is_device_page. However, I still need to run tests to validate this.
In migrate.c it is used in remove_migration_pte and migrate_vma_insert_page,
but those are conditions specific to the device-private type.
Thanks for raising these questions, I think we're getting close.

Regards,
Alex S.
>
> I still think this patch needs an ACK from a FS/DAX maintainer.
>
Felix Kuehling Aug. 19, 2021, 7:59 p.m. UTC | #8
On 2021-08-19 at 2:00 p.m., Sierra Guiza, Alejandro (Alex) wrote:
>
> On 8/18/2021 2:28 PM, Ralph Campbell wrote:
>> On 8/17/21 5:35 PM, Felix Kuehling wrote:
>>> On 2021-08-17 at 8:01 p.m., Ralph Campbell wrote:
>>>> On 8/12/21 11:31 PM, Alex Sierra wrote:
>>>>> From: Ralph Campbell <rcampbell@nvidia.com>
>>>>>
>>>>> ZONE_DEVICE struct pages have an extra reference count that
>>>>> complicates the
>>>>> code for put_page() and several places in the kernel that need to
>>>>> check the
>>>>> reference count to see that a page is not being used (gup,
>>>>> compaction,
>>>>> migration, etc.). Clean up the code so the reference count doesn't
>>>>> need to
>>>>> be treated specially for ZONE_DEVICE.
>>>>>
>>>>> v2:
>>>>> AS: merged this patch in linux 5.11 version
>>>>>
>>>>> v5:
>>>>> AS: add condition at try_grab_page to check for the zone device type,
>>>>> while
>>>>> page ref counter is checked less/equal to zero. In case of device
>>>>> zone, pages
>>>>> ref counter are initialized to zero.
>>>>>
>>>>> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
>>>>> Signed-off-by: Alex Sierra <alex.sierra@amd.com>
>>>>> ---
>>>>>    arch/powerpc/kvm/book3s_hv_uvmem.c     |  2 +-
>>>>>    drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 +-
>>>>>    fs/dax.c                               |  4 +-
>>>>>    include/linux/dax.h                    |  2 +-
>>>>>    include/linux/memremap.h               |  7 +--
>>>>>    include/linux/mm.h                     | 13 +----
>>>>>    lib/test_hmm.c                         |  2 +-
>>>>>    mm/internal.h                          |  8 +++
>>>>>    mm/memremap.c                          | 68 +++++++-------------------
>>>>>    mm/migrate.c                           |  5 --
>>>>>    mm/page_alloc.c                        |  3 ++
>>>>>    mm/swap.c                              | 45 ++---------------
>>>>>    12 files changed, 46 insertions(+), 115 deletions(-)
>>>>>
>>>> I haven't seen a response to the issues I raised back at v3 of this
>>>> series.
>>>> https://lore.kernel.org/linux-mm/4f6dd918-d79b-1aa7-3a4c-caa67ddc29bc@nvidia.com/
>>>>
>>>>
>>>>
>>>> Did I miss something?
>>> I think part of the response was that we did more testing. Alex added
>>> support for DEVICE_GENERIC pages to test_hmm and he ran DAX tests
>>> recommended by Theodore Tso. In that testing he ran into a WARN_ON_ONCE
>>> about a zero page refcount in try_get_page. The fix is in the latest
>>> version of patch 2. But it's already obsolete because John Hubbard is
>>> about to remove that function altogether.
>>>
>>> I think the issues you raised were more uncertainty than known bugs. It
>>> seems the fact that you can have DAX pages with 0 refcount is a feature
>>> more than a bug.
>>>
>>> Regards,
>>>    Felix
>>
>> Did you test on a system without CONFIG_ARCH_HAS_PTE_SPECIAL defined?
>> In that case, mmap() of a DAX device will call insert_page() which calls
>> get_page() which would trigger VM_BUG_ON_PAGE().
>>
>> I can believe it is OK for PTE_SPECIAL page table entries to have no
>> struct page or that MEMORY_DEVICE_GENERIC struct pages be mapped with
>> a zero reference count using insert_pfn().
> Hi Ralph,
> We have tried the DAX tests with and without
> CONFIG_ARCH_HAS_PTE_SPECIAL defined.
> Apparently none of the tests touches that condition for a DAX device.
> Of course,
> that doesn't mean it couldn't happen.
>
> Regards,
> Alex S.
>
>>
>>
>> I find it hard to believe that other MM developers don't see an issue
>> with a struct page with refcount == 0 and mapcount == 1.
>>
>> I don't see where init_page_count() is being called for the
>> MEMORY_DEVICE_GENERIC or MEMORY_DEVICE_PRIVATE struct pages the AMD
>> driver allocates and passes to migrate_vma_setup().
>> Looks like svm_migrate_get_vram_page() needs to call init_page_count()
>> instead of get_page(). (I'm looking at branch
>> origin/alexsierrag/device_generic
>> https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver.git)
> Yes, you're right. My bad. Thanks for catching this up. I didn't
> realize I was missing
> to define CONFIG_DEBUG_VM on my build. Therefore this BUG was never
> caught.
> It worked after I replaced get_pages by init_page_count at
> svm_migrate_get_vram_page. However, I don't think this is the best way
> to fix it.
> Ideally, get_pages call should work for device pages with ref count
> equal to 0
> too. Otherwise, we could overwrite refcounter if someone else is
> grabbing the page
> concurrently.

I think using init_page_count in svm_migrate_get_vram_page is the right
answer. This is where the page first gets allocated and initialized
(data migrated into it). I think nobody should have or try to take a
reference to the page before that. We should probably also add a
VM_BUG_ON_PAGE(page_ref_count(page) != 0) before calling init_page_count
to make sure of that.
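
Something like this (a sketch only; the function shape is assumed from the
branch Ralph is looking at, and I haven't tested it):

static void svm_migrate_get_vram_page(struct svm_range *prange,
				      unsigned long pfn)
{
	struct page *page = pfn_to_page(pfn);

	/* driver bookkeeping (zone_device_data etc.) elided */

	/* nobody should hold a reference before the page is handed out */
	VM_BUG_ON_PAGE(page_ref_count(page) != 0, page);
	init_page_count(page);	/* instead of get_page(page) */
	lock_page(page);
}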


> I was thinking to add a special condition in get_pages for dev pages.
> This could
> also fix the insert_page -> get_page call from a DAX device.

[+Theodore]

I got lost trying to understand how DAX counts page references and how
the PTE_SPECIAL option affects that. Theodore, can you help with this?
Is there an easy way to test without CONFIG_ARCH_HAS_PTE_SPECIAL on x86,
or do we need to test on a CPU architecture that doesn't support this
feature?

Thanks,
  Felix


>
> Regards,
> Alex S.
>>
>>
>> Also, what about the other places where is_device_private_page() is
>> called?
>> Don't they need to be updated to call is_device_page() instead?
>> One of my goals for this patch was to remove special casing reference
>> counts
>> for ZONE_DEVICE pages in rmap.c, etc.
> Correct, is_device_private_page is still used in rmap, memcontrol and
> migrate.c files
> Looks like rmap and memcontrol should be replaced by is_device_page
> function. However,
> I still need test to validate this. For migrate.c is used in
> remove_migration_pte and
> migrate_vma_insert_page, however these are specific conditions for
> private device type
> Thanks for raise these questions, I think we're getting close.
>
> Regards,
> Alex S.
>>
>> I still think this patch needs an ACK from a FS/DAX maintainer.
>>
Christoph Hellwig Aug. 20, 2021, 4:40 a.m. UTC | #9
On Thu, Aug 19, 2021 at 03:59:56PM -0400, Felix Kuehling wrote:
> I got lost trying to understand how DAX counts page references and how
> the PTE_SPECIAL option affects that. Theodore, can you help with this?
> Is there an easy way to test without CONFIG_ARCH_HAS_PTE_SPECIAL on x86,
> or do we need to test on a CPU architecture that doesn't support this
> feature?

I think the right answer is to simply disallow ZONE_DEVICE pages
if ARCH_HAS_PTE_SPECIAL is not supported.  ARCH_HAS_PTE_SPECIAL is
supported by all modern architecture ports that can make use of
ZONE_DEVICE / dev_pagemap, so we can avoid this pocket of barely
testable code entirely:


diff --git a/mm/Kconfig b/mm/Kconfig
index 40a9bfcd5062e1..2823bbfd1c8c70 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -775,6 +775,7 @@ config ZONE_DMA32
 
 config ZONE_DEVICE
 	bool "Device memory (pmem, HMM, etc...) hotplug support"
+	depends on ARCH_HAS_PTE_SPECIAL
 	depends on MEMORY_HOTPLUG
 	depends on MEMORY_HOTREMOVE
 	depends on SPARSEMEM_VMEMMAP
Christoph Hellwig Aug. 20, 2021, 4:56 a.m. UTC | #10
On Wed, Aug 18, 2021 at 12:28:30PM -0700, Ralph Campbell wrote:
> Did you test on a system without CONFIG_ARCH_HAS_PTE_SPECIAL defined?
> In that case, mmap() of a DAX device will call insert_page() which calls
> get_page() which would trigger VM_BUG_ON_PAGE().

__vm_insert_mixed still ends up calling insert_pfn in the
!CONFIG_ARCH_HAS_PTE_SPECIAL case if pfn_t_devmap() is true, which it should
be for DAX.  (And as said in my other mail, I suspect we should disallow
that case anyway, as no one can test it in practice.)
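
The relevant branch looks roughly like this (paraphrasing mm/memory.c, not
quoting it exactly):

static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
		unsigned long addr, pfn_t pfn, bool mkwrite)
{
	pgprot_t pgprot = vma->vm_page_prot;
	int err;
	...
	/*
	 * Without pte_special(), insert_page() (and thus get_page())
	 * is only used for pfns that are not devmap, so DAX still
	 * goes through insert_pfn() and never takes a page reference.
	 */
	if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) &&
	    !pfn_t_devmap(pfn) && pfn_t_valid(pfn)) {
		struct page *page = pfn_to_page(pfn_t_to_pfn(pfn));

		err = insert_page(vma, addr, page, pgprot);
	} else {
		return insert_pfn(vma, addr, pfn, pgprot, mkwrite);
	}
	...
}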
Jerome Glisse Aug. 20, 2021, 6:33 a.m. UTC | #11
Note that you do not want GUP to succeed on a device page; I do not see
where that is handled in the new code.
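
For illustration only, the kind of guard I would expect somewhere in the
GUP path (hypothetical, not from this series; a real check would have to
look at page->pgmap->type, since FS_DAX pages must stay GUP-able while
driver-managed device memory may not be):

	/* e.g. in try_grab_compound_head() / try_grab_page() */
	if (is_zone_device_page(page) &&
	    page->pgmap->type == MEMORY_DEVICE_GENERIC)
		return NULL;	/* refuse to pin driver-managed memory */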

On Sun, Aug 15, 2021 at 1:40 PM John Hubbard <jhubbard@nvidia.com> wrote:
>
> On 8/15/21 8:37 AM, Christoph Hellwig wrote:
> >> diff --git a/include/linux/mm.h b/include/linux/mm.h
> >> index 8ae31622deef..d48a1f0889d1 100644
> >> --- a/include/linux/mm.h
> >> +++ b/include/linux/mm.h
> >> @@ -1218,7 +1218,7 @@ __maybe_unused struct page *try_grab_compound_head(struct page *page, int refs,
> >>   static inline __must_check bool try_get_page(struct page *page)
> >>   {
> >>      page = compound_head(page);
> >> -    if (WARN_ON_ONCE(page_ref_count(page) <= 0))
> >> +    if (WARN_ON_ONCE(page_ref_count(page) < (int)!is_zone_device_page(page)))
> >
> > Please avoid the overly long line.  In fact I'd be tempted to just not
> > bother here and keep the old, more loose check.  Especially given that
> > John has a patch ready that removes try_get_page entirely.
> >
>
> Yes. Andrew has accepted it into mmotm.
>
> Ralph's patch here was written well before my cleanup that removed
> > try_get_page() [1]. But now that we're here, if you drop this hunk then
> it will make merging easier, I think.
>
>
> [1] https://lore.kernel.org/r/20210813044133.1536842-4-jhubbard@nvidia.com
>
> thanks,
> --
> John Hubbard
> NVIDIA
>
Jerome Glisse Aug. 20, 2021, 7:17 a.m. UTC | #12
On Thu, Aug 19, 2021 at 11:00 AM Sierra Guiza, Alejandro (Alex)
<alex.sierra@amd.com> wrote:
>
>
> On 8/18/2021 2:28 PM, Ralph Campbell wrote:
> > On 8/17/21 5:35 PM, Felix Kuehling wrote:
> >> On 2021-08-17 at 8:01 p.m., Ralph Campbell wrote:
> >>> On 8/12/21 11:31 PM, Alex Sierra wrote:
> >>>> From: Ralph Campbell <rcampbell@nvidia.com>
> >>>>
> >>>> ZONE_DEVICE struct pages have an extra reference count that
> >>>> complicates the
> >>>> code for put_page() and several places in the kernel that need to
> >>>> check the
> >>>> reference count to see that a page is not being used (gup, compaction,
> >>>> migration, etc.). Clean up the code so the reference count doesn't
> >>>> need to
> >>>> be treated specially for ZONE_DEVICE.
> >>>>
> >>>> v2:
> >>>> AS: merged this patch in linux 5.11 version
> >>>>
> >>>> v5:
> >>>> AS: add condition at try_grab_page to check for the zone device type,
> >>>> while
> >>>> page ref counter is checked less/equal to zero. In case of device
> >>>> zone, pages
> >>>> ref counter are initialized to zero.
> >>>>
> >>>> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> >>>> Signed-off-by: Alex Sierra <alex.sierra@amd.com>
> >>>> ---
> >>>>    arch/powerpc/kvm/book3s_hv_uvmem.c     |  2 +-
> >>>>    drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 +-
> >>>>    fs/dax.c                               |  4 +-
> >>>>    include/linux/dax.h                    |  2 +-
> >>>>    include/linux/memremap.h               |  7 +--
> >>>>    include/linux/mm.h                     | 13 +----
> >>>>    lib/test_hmm.c                         |  2 +-
> >>>>    mm/internal.h                          |  8 +++
> >>>>    mm/memremap.c                          | 68 +++++++-------------------
> >>>>    mm/migrate.c                           |  5 --
> >>>>    mm/page_alloc.c                        |  3 ++
> >>>>    mm/swap.c                              | 45 ++---------------
> >>>>    12 files changed, 46 insertions(+), 115 deletions(-)
> >>>>
> >>> I haven't seen a response to the issues I raised back at v3 of this
> >>> series.
> >>> https://lore.kernel.org/linux-mm/4f6dd918-d79b-1aa7-3a4c-caa67ddc29bc@nvidia.com/
> >>>
> >>> Did I miss something?
> >> I think part of the response was that we did more testing. Alex added
> >> support for DEVICE_GENERIC pages to test_hmm and he ran DAX tests
> >> recommended by Theodore Tso. In that testing he ran into a WARN_ON_ONCE
> >> about a zero page refcount in try_get_page. The fix is in the latest
> >> version of patch 2. But it's already obsolete because John Hubbard is
> >> about to remove that function altogether.
> >>
> >> I think the issues you raised were more uncertainty than known bugs. It
> >> seems the fact that you can have DAX pages with 0 refcount is a feature
> >> more than a bug.
> >>
> >> Regards,
> >>    Felix
> >
> > Did you test on a system without CONFIG_ARCH_HAS_PTE_SPECIAL defined?
> > In that case, mmap() of a DAX device will call insert_page() which calls
> > get_page() which would trigger VM_BUG_ON_PAGE().
> >
> > I can believe it is OK for PTE_SPECIAL page table entries to have no
> > struct page or that MEMORY_DEVICE_GENERIC struct pages be mapped with
> > a zero reference count using insert_pfn().
> Hi Ralph,
> We have tried the DAX tests with and without CONFIG_ARCH_HAS_PTE_SPECIAL
> defined.
> Apparently none of the tests touches that condition for a DAX device. Of
> course,
> that doesn't mean it couldn't happen.
>
> Regards,
> Alex S.
>
> >
> >
> > I find it hard to believe that other MM developers don't see an issue
> > with a struct page with refcount == 0 and mapcount == 1.
> >
> > I don't see where init_page_count() is being called for the
> > MEMORY_DEVICE_GENERIC or MEMORY_DEVICE_PRIVATE struct pages the AMD
> > driver allocates and passes to migrate_vma_setup().
> > Looks like svm_migrate_get_vram_page() needs to call init_page_count()
> > instead of get_page(). (I'm looking at branch
> > origin/alexsierrag/device_generic
> > https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver.git)
> Yes, you're right. My bad. Thanks for catching this up. I didn't realize
> I was missing
> to define CONFIG_DEBUG_VM on my build. Therefore this BUG was never caught.
> It worked after I replaced get_pages by init_page_count at
> svm_migrate_get_vram_page. However, I don't think this is the best way
> to fix it.

You definitely don't want to do that. Re-initializing the page refcount is
wrong; it should be done only once. Nouveau is not a good example here.

> Ideally, get_pages call should work for device pages with ref count
> equal to 0
> too. Otherwise, we could overwrite refcounter if someone else is
> grabbing the page
> concurrently.
> I was thinking to add a special condition in get_pages for dev pages.
> This could
> also fix the insert_page -> get_page call from a DAX device.

What is the issue here, exactly?

>
> Regards,
> Alex S.
> >
> >
> > Also, what about the other places where is_device_private_page() is
> > called?
> > Don't they need to be updated to call is_device_page() instead?
> > One of my goals for this patch was to remove special casing reference
> > counts
> > for ZONE_DEVICE pages in rmap.c, etc.
> Correct, is_device_private_page is still used in rmap, memcontrol and
> migrate.c files
> Looks like rmap and memcontrol should be replaced by is_device_page
> function.

No, you do not want to do that. The private case is a special case for a reason.

Jerome

Patch

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 84e5a2dc8be5..acee67710620 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -711,7 +711,7 @@  static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
 
 	dpage = pfn_to_page(uvmem_pfn);
 	dpage->zone_device_data = pvt;
-	get_page(dpage);
+	init_page_count(dpage);
 	lock_page(dpage);
 	return dpage;
 out_clear:
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 92987daa5e17..8bc7120e1216 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -324,7 +324,7 @@  nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm)
 			return NULL;
 	}
 
-	get_page(page);
+	init_page_count(page);
 	lock_page(page);
 	return page;
 }
diff --git a/fs/dax.c b/fs/dax.c
index c387d09e3e5a..1166630b7190 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -571,14 +571,14 @@  static void *grab_mapping_entry(struct xa_state *xas,
 
 /**
  * dax_layout_busy_page_range - find first pinned page in @mapping
- * @mapping: address space to scan for a page with ref count > 1
+ * @mapping: address space to scan for a page with ref count > 0
  * @start: Starting offset. Page containing 'start' is included.
  * @end: End offset. Page containing 'end' is included. If 'end' is LLONG_MAX,
  *       pages from 'start' till the end of file are included.
  *
  * DAX requires ZONE_DEVICE mapped pages. These pages are never
  * 'onlined' to the page allocator so they are considered idle when
- * page->count == 1. A filesystem uses this interface to determine if
+ * page->count == 0. A filesystem uses this interface to determine if
  * any page in the mapping is busy, i.e. for DMA, or other
  * get_user_pages() usages.
  *
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 8b5da1d60dbc..05fc982ce153 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -245,7 +245,7 @@  static inline bool dax_mapping(struct address_space *mapping)
 
 static inline bool dax_page_unused(struct page *page)
 {
-	return page_ref_count(page) == 1;
+	return page_ref_count(page) == 0;
 }
 
 #define dax_wait_page(_inode, _page, _wait_cb)				\
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 45a79da89c5f..77ff5fd0685f 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -66,9 +66,10 @@  enum memory_type {
 
 struct dev_pagemap_ops {
 	/*
-	 * Called once the page refcount reaches 1.  (ZONE_DEVICE pages never
-	 * reach 0 refcount unless there is a refcount bug. This allows the
-	 * device driver to implement its own memory management.)
+	 * Called once the page refcount reaches 0. The reference count
+	 * should be reset to one with init_page_count(page) before reusing
+	 * the page. This allows the device driver to implement its own
+	 * memory management.
 	 */
 	void (*page_free)(struct page *page);
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8ae31622deef..d48a1f0889d1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1218,7 +1218,7 @@  __maybe_unused struct page *try_grab_compound_head(struct page *page, int refs,
 static inline __must_check bool try_get_page(struct page *page)
 {
 	page = compound_head(page);
-	if (WARN_ON_ONCE(page_ref_count(page) <= 0))
+	if (WARN_ON_ONCE(page_ref_count(page) < (int)!is_zone_device_page(page)))
 		return false;
 	page_ref_inc(page);
 	return true;
@@ -1228,17 +1228,6 @@  static inline void put_page(struct page *page)
 {
 	page = compound_head(page);
 
-	/*
-	 * For devmap managed pages we need to catch refcount transition from
-	 * 2 to 1, when refcount reach one it means the page is free and we
-	 * need to inform the device driver through callback. See
-	 * include/linux/memremap.h and HMM for details.
-	 */
-	if (page_is_devmap_managed(page)) {
-		put_devmap_managed_page(page);
-		return;
-	}
-
 	if (put_page_testzero(page))
 		__put_page(page);
 }
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 80a78877bd93..6998f10350ea 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -561,7 +561,7 @@  static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
 	}
 
 	dpage->zone_device_data = rpage;
-	get_page(dpage);
+	init_page_count(dpage);
 	lock_page(dpage);
 	return dpage;
 
diff --git a/mm/internal.h b/mm/internal.h
index e8fdb531f887..5438cceca4b9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -667,4 +667,12 @@  int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 
 void vunmap_range_noflush(unsigned long start, unsigned long end);
 
+#ifdef CONFIG_DEV_PAGEMAP_OPS
+void free_zone_device_page(struct page *page);
+#else
+static inline void free_zone_device_page(struct page *page)
+{
+}
+#endif
+
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/memremap.c b/mm/memremap.c
index 15a074ffb8d7..5aa8163fd948 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -12,6 +12,7 @@ 
 #include <linux/types.h>
 #include <linux/wait_bit.h>
 #include <linux/xarray.h>
+#include "internal.h"
 
 static DEFINE_XARRAY(pgmap_array);
 
@@ -37,32 +38,6 @@  unsigned long memremap_compat_align(void)
 EXPORT_SYMBOL_GPL(memremap_compat_align);
 #endif
 
-#ifdef CONFIG_DEV_PAGEMAP_OPS
-DEFINE_STATIC_KEY_FALSE(devmap_managed_key);
-EXPORT_SYMBOL(devmap_managed_key);
-
-static void devmap_managed_enable_put(struct dev_pagemap *pgmap)
-{
-	if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
-	    pgmap->type == MEMORY_DEVICE_FS_DAX)
-		static_branch_dec(&devmap_managed_key);
-}
-
-static void devmap_managed_enable_get(struct dev_pagemap *pgmap)
-{
-	if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
-	    pgmap->type == MEMORY_DEVICE_FS_DAX)
-		static_branch_inc(&devmap_managed_key);
-}
-#else
-static void devmap_managed_enable_get(struct dev_pagemap *pgmap)
-{
-}
-static void devmap_managed_enable_put(struct dev_pagemap *pgmap)
-{
-}
-#endif /* CONFIG_DEV_PAGEMAP_OPS */
-
 static void pgmap_array_delete(struct range *range)
 {
 	xa_store_range(&pgmap_array, PHYS_PFN(range->start), PHYS_PFN(range->end),
@@ -102,16 +77,6 @@  static unsigned long pfn_end(struct dev_pagemap *pgmap, int range_id)
 	return (range->start + range_len(range)) >> PAGE_SHIFT;
 }
 
-static unsigned long pfn_next(unsigned long pfn)
-{
-	if (pfn % 1024 == 0)
-		cond_resched();
-	return pfn + 1;
-}
-
-#define for_each_device_pfn(pfn, map, i) \
-	for (pfn = pfn_first(map, i); pfn < pfn_end(map, i); pfn = pfn_next(pfn))
-
 static void dev_pagemap_kill(struct dev_pagemap *pgmap)
 {
 	if (pgmap->ops && pgmap->ops->kill)
@@ -167,20 +132,18 @@  static void pageunmap_range(struct dev_pagemap *pgmap, int range_id)
 
 void memunmap_pages(struct dev_pagemap *pgmap)
 {
-	unsigned long pfn;
 	int i;
 
 	dev_pagemap_kill(pgmap);
 	for (i = 0; i < pgmap->nr_range; i++)
-		for_each_device_pfn(pfn, pgmap, i)
-			put_page(pfn_to_page(pfn));
+		percpu_ref_put_many(pgmap->ref, pfn_end(pgmap, i) -
+						pfn_first(pgmap, i));
 	dev_pagemap_cleanup(pgmap);
 
 	for (i = 0; i < pgmap->nr_range; i++)
 		pageunmap_range(pgmap, i);
 
 	WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
-	devmap_managed_enable_put(pgmap);
 }
 EXPORT_SYMBOL_GPL(memunmap_pages);
 
@@ -382,8 +345,6 @@  void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 		}
 	}
 
-	devmap_managed_enable_get(pgmap);
-
 	/*
 	 * Clear the pgmap nr_range as it will be incremented for each
 	 * successfully processed range. This communicates how many
@@ -498,16 +459,10 @@  struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 EXPORT_SYMBOL_GPL(get_dev_pagemap);
 
 #ifdef CONFIG_DEV_PAGEMAP_OPS
-void free_devmap_managed_page(struct page *page)
+static void free_device_private_page(struct page *page)
 {
-	/* notify page idle for dax */
-	if (!is_device_private_page(page)) {
-		wake_up_var(&page->_refcount);
-		return;
-	}
 
 	__ClearPageWaiters(page);
-
 	mem_cgroup_uncharge(page);
 
 	/*
@@ -534,4 +489,19 @@  void free_devmap_managed_page(struct page *page)
 	page->mapping = NULL;
 	page->pgmap->ops->page_free(page);
 }
+
+void free_zone_device_page(struct page *page)
+{
+	switch (page->pgmap->type) {
+	case MEMORY_DEVICE_FS_DAX:
+		/* notify page idle */
+		wake_up_var(&page->_refcount);
+		return;
+	case MEMORY_DEVICE_PRIVATE:
+		free_device_private_page(page);
+		return;
+	default:
+		return;
+	}
+}
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
diff --git a/mm/migrate.c b/mm/migrate.c
index 41ff2c9896c4..e3a10e2a1bb3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -350,11 +350,6 @@  static int expected_page_refs(struct address_space *mapping, struct page *page)
 {
 	int expected_count = 1;
 
-	/*
-	 * Device private pages have an extra refcount as they are
-	 * ZONE_DEVICE pages.
-	 */
-	expected_count += is_device_private_page(page);
 	if (mapping)
 		expected_count += thp_nr_pages(page) + page_has_private(page);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ef2265f86b91..1ef1f733af5b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6414,6 +6414,9 @@  void __ref memmap_init_zone_device(struct zone *zone,
 
 		__init_single_page(page, pfn, zone_idx, nid);
 
+		/* ZONE_DEVICE pages start with a zero reference count. */
+		set_page_count(page, 0);
+
 		/*
 		 * Mark page reserved as it will need to wait for onlining
 		 * phase for it to be fully associated with a zone.
diff --git a/mm/swap.c b/mm/swap.c
index dfb48cf9c2c9..9e821f1951c5 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -114,12 +114,11 @@  static void __put_compound_page(struct page *page)
 void __put_page(struct page *page)
 {
 	if (is_zone_device_page(page)) {
-		put_dev_pagemap(page->pgmap);
-
 		/*
 		 * The page belongs to the device that created pgmap. Do
 		 * not return it to page allocator.
 		 */
+		free_zone_device_page(page);
 		return;
 	}
 
@@ -917,29 +916,18 @@  void release_pages(struct page **pages, int nr)
 		if (is_huge_zero_page(page))
 			continue;
 
+		if (!put_page_testzero(page))
+			continue;
+
 		if (is_zone_device_page(page)) {
 			if (lruvec) {
 				unlock_page_lruvec_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
-			/*
-			 * ZONE_DEVICE pages that return 'false' from
-			 * page_is_devmap_managed() do not require special
-			 * processing, and instead, expect a call to
-			 * put_page_testzero().
-			 */
-			if (page_is_devmap_managed(page)) {
-				put_devmap_managed_page(page);
-				continue;
-			}
-			if (put_page_testzero(page))
-				put_dev_pagemap(page->pgmap);
+			free_zone_device_page(page);
 			continue;
 		}
 
-		if (!put_page_testzero(page))
-			continue;
-
 		if (PageCompound(page)) {
 			if (lruvec) {
 				unlock_page_lruvec_irqrestore(lruvec, flags);
@@ -1143,26 +1131,3 @@  void __init swap_setup(void)
 	 * _really_ don't want to cluster much more
 	 */
 }
-
-#ifdef CONFIG_DEV_PAGEMAP_OPS
-void put_devmap_managed_page(struct page *page)
-{
-	int count;
-
-	if (WARN_ON_ONCE(!page_is_devmap_managed(page)))
-		return;
-
-	count = page_ref_dec_return(page);
-
-	/*
-	 * devmap page refcounts are 1-based, rather than 0-based: if
-	 * refcount is 1, then the page is free and the refcount is
-	 * stable because nobody holds a reference on the page.
-	 */
-	if (count == 1)
-		free_devmap_managed_page(page);
-	else if (!count)
-		__put_page(page);
-}
-EXPORT_SYMBOL(put_devmap_managed_page);
-#endif