From patchwork Thu May 12 20:41:43 2022
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12848099
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, "Paul E. McKenney", John Hubbard, John Dias,
    Minchan Kim, David Hildenbrand
Subject: [PATCH v5] mm: fix is_pinnable_page against on cma page
Date: Thu, 12 May 2022 13:41:43 -0700
Message-Id: <20220512204143.3961150-1-minchan@kernel.org>

Pages in a CMA area can have MIGRATE_ISOLATE as well as MIGRATE_CMA, so
the current is_pinnable_page() misses CMA pages whose migratetype is
MIGRATE_ISOLATE. Those pages end up pinned as longterm by the
pin_user_pages() APIs, so CMA allocation keeps failing until the pin is
released:

CPU 0                                   CPU 1 - Task B

cma_alloc
alloc_contig_range
                                        pin_user_pages_fast(FOLL_LONGTERM)
change pageblock as MIGRATE_ISOLATE
                                        internal_get_user_pages_fast
                                        lockless_pages_from_mm
                                        gup_pte_range
                                        try_grab_folio
                                        is_pinnable_page
                                          return true;
                                        So, pinned the page successfully.
page migration failure with pinned page
..
.. After 30 sec
                                        unpin_user_page(page)
CMA allocation succeeded after 30 sec.

The CMA allocation path protects against racing migratetype changes by
taking zone->lock, but all the GUP path needs to know is whether the page
is in a CMA area, not its exact migratetype. Thus, we don't need
zone->lock; it is enough to check whether the migratetype is either
MIGRATE_ISOLATE or MIGRATE_CMA.
Adding the MIGRATE_ISOLATE check to is_pinnable_page() can reject pinning
pages on MIGRATE_ISOLATE pageblocks even when the page is neither in a CMA
area nor in the movable zone, if the page is temporarily unmovable.
However, such a migration failure caused by an unexpected, temporary
refcount hold is a general issue, not something specific to
MIGRATE_ISOLATE, and MIGRATE_ISOLATE is itself a transient state, like the
other temporarily-elevated-refcount problems.

Cc: "Paul E. McKenney"
Cc: John Hubbard
Cc: David Hildenbrand
Reviewed-by: John Hubbard
Signed-off-by: Minchan Kim
---
* from v4 - https://lore.kernel.org/all/20220510211743.95831-1-minchan@kernel.org/
  * clarification why we need READ_ONCE - Paul
  * Adding a comment about READ_ONCE - John

* from v3 - https://lore.kernel.org/all/20220509153430.4125710-1-minchan@kernel.org/
  * Fix typo and adding more description - akpm

* from v2 - https://lore.kernel.org/all/20220505064429.2818496-1-minchan@kernel.org/
  * Use __READ_ONCE instead of volatile - akpm

* from v1 - https://lore.kernel.org/all/20220502173558.2510641-1-minchan@kernel.org/
  * fix build warning - lkp
  * fix refetching issue of migration type
  * add side effect on !ZONE_MOVABLE and !MIGRATE_CMA in description - david

 include/linux/mm.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6acca5cecbc5..2d7a5d87decd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1625,8 +1625,20 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
 #ifdef CONFIG_MIGRATION
 static inline bool is_pinnable_page(struct page *page)
 {
-	return !(is_zone_movable_page(page) || is_migrate_cma_page(page)) ||
-		is_zero_pfn(page_to_pfn(page));
+#ifdef CONFIG_CMA
+	int __mt = get_pageblock_migratetype(page);
+
+	/*
+	 * Defend against future compiler LTO features, or code refactoring
+	 * that inlines the above function, by forcing a single read. Because,
+	 * this routine races with set_pageblock_migratetype(), and we want to
+	 * avoid reading zero, when actually one or the other flags was set.
+	 */
+	int mt = __READ_ONCE(__mt);
+
+	if (mt & (MIGRATE_CMA | MIGRATE_ISOLATE))
+		return false;
+#endif
+
+	return !(is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page)));
 }
 #else
 static inline bool is_pinnable_page(struct page *page)