
[3/9] mm/z3fold: remove buggy use of stale list for allocation

Message ID 20220429064051.61552-4-linmiaohe@huawei.com
State New
Series A few fixup patches for z3fold

Commit Message

Miaohe Lin April 29, 2022, 6:40 a.m. UTC
Currently, if z3fold cannot find an unbuddied page, it first tries to
pull a page off the stale list. This approach is problematic. If
initializing the z3fold page fails later, the page should be freed via
free_z3fold_page() to clean up the associated resources instead of
calling __free_page() directly. And if the page is successfully reused,
__SetPageMovable() will BUG_ON() later because the page is already a
non-lru movable page, i.e. PAGE_MAPPING_MOVABLE is already set in
page->mapping. To fix all of these issues, we can simply remove the
buggy use of the stale list for allocation: can_sleep should always be
false here, so the reuse code path is never actually hit.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 23 +----------------------
 1 file changed, 1 insertion(+), 22 deletions(-)
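
[Editor's note: for illustration, here is a paraphrased sketch of the
code that follows the allocation in z3fold_alloc() and that a page
reused from the stale list would have reached. It is simplified and
annotated, not the verbatim upstream source.]

	zhdr = init_z3fold_page(page, bud == HEADLESS, pool, gfp);
	if (!zhdr) {
		/*
		 * Wrong for a page reused from the stale list: such a
		 * page still owns z3fold resources and would need
		 * free_z3fold_page(), not a bare __free_page().
		 */
		__free_page(page);
		return -ENOMEM;
	}
	/* ... */
	/*
	 * A reused stale page is already a non-lru movable page, i.e.
	 * PAGE_MAPPING_MOVABLE is still set in page->mapping, so
	 * marking it movable again triggers the BUG_ON in
	 * __SetPageMovable().
	 */
	__SetPageMovable(page, pool->inode->i_mapping);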

Comments

Vitaly Wool May 19, 2022, 7:06 a.m. UTC | #1
On Fri, Apr 29, 2022 at 8:41 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> Currently, if z3fold cannot find an unbuddied page, it first tries to
> pull a page off the stale list. This approach is problematic. If
> initializing the z3fold page fails later, the page should be freed via
> free_z3fold_page() to clean up the associated resources instead of
> calling __free_page() directly. And if the page is successfully reused,
> __SetPageMovable() will BUG_ON() later because the page is already a
> non-lru movable page, i.e. PAGE_MAPPING_MOVABLE is already set in
> page->mapping. To fix all of these issues, we can simply remove the
> buggy use of the stale list for allocation: can_sleep should always be
> false here, so the reuse code path is never actually hit.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 23 +----------------------
>  1 file changed, 1 insertion(+), 22 deletions(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 5d8c21f2bc59..4e6814c5694f 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -1102,28 +1102,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
>                 bud = FIRST;
>         }
>
> -       page = NULL;
> -       if (can_sleep) {
> -               spin_lock(&pool->stale_lock);
> -               zhdr = list_first_entry_or_null(&pool->stale,
> -                                               struct z3fold_header, buddy);
> -               /*
> -                * Before allocating a page, let's see if we can take one from
> -                * the stale pages list. cancel_work_sync() can sleep so we
> -                * limit this case to the contexts where we can sleep
> -                */
> -               if (zhdr) {
> -                       list_del(&zhdr->buddy);
> -                       spin_unlock(&pool->stale_lock);
> -                       cancel_work_sync(&zhdr->work);
> -                       page = virt_to_page(zhdr);
> -               } else {
> -                       spin_unlock(&pool->stale_lock);
> -               }
> -       }
> -       if (!page)
> -               page = alloc_page(gfp);
> -
> +       page = alloc_page(gfp);
>         if (!page)
>                 return -ENOMEM;

Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com>
> --
> 2.23.0
>

Patch

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 5d8c21f2bc59..4e6814c5694f 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -1102,28 +1102,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
 		bud = FIRST;
 	}
 
-	page = NULL;
-	if (can_sleep) {
-		spin_lock(&pool->stale_lock);
-		zhdr = list_first_entry_or_null(&pool->stale,
-						struct z3fold_header, buddy);
-		/*
-		 * Before allocating a page, let's see if we can take one from
-		 * the stale pages list. cancel_work_sync() can sleep so we
-		 * limit this case to the contexts where we can sleep
-		 */
-		if (zhdr) {
-			list_del(&zhdr->buddy);
-			spin_unlock(&pool->stale_lock);
-			cancel_work_sync(&zhdr->work);
-			page = virt_to_page(zhdr);
-		} else {
-			spin_unlock(&pool->stale_lock);
-		}
-	}
-	if (!page)
-		page = alloc_page(gfp);
-
+	page = alloc_page(gfp);
 	if (!page)
 		return -ENOMEM;
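
[Editor's note: as context for the "can_sleep should always be false"
claim, z3fold is reached through the zpool API, and its main user zswap
allocates with GFP flags that do not allow direct reclaim. A minimal
sketch of the relevant check follows; it is paraphrased, and the exact
zswap flags shown are an assumption based on the code of that era.]

	/* In z3fold_alloc(), can_sleep is derived from the caller's gfp: */
	bool can_sleep = gfpflags_allow_blocking(gfp);

	/*
	 * zswap passes gfp without __GFP_DIRECT_RECLAIM (e.g.
	 * __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM), so
	 * gfpflags_allow_blocking() returns false and the removed
	 * stale-list reuse branch was effectively dead code.
	 */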