From patchwork Fri Mar  1 21:47:06 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13579164
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Oscar Salvador
Subject: [PATCH 1/5] hugetlb: Make folio_test_hugetlb safer to call
Date: Fri,  1 Mar 2024 21:47:06 +0000
Message-ID: <20240301214712.2853147-2-willy@infradead.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240301214712.2853147-1-willy@infradead.org>
References: <20240301214712.2853147-1-willy@infradead.org>
MIME-Version: 1.0

At least two places (memory-failure and page migration) need to call
folio_test_hugetlb() without a reference on the folio.  This can
currently result in false positives (returning true when the folio
doesn't belong to hugetlb) and, more commonly, in a VM_BUG_ON() when a
folio is split concurrently.

The new way to distinguish a hugetlb folio is to see if (1) the page is
compound (or the folio is large) and (2) page[1].mapping is set to the
address of hugetlb_lock.  If the folio is (or has been) large, then
page[1] is guaranteed to exist.  If the folio is split between the two
tests, page[1].mapping will be set to something which definitely isn't
the address of hugetlb_lock.

Because we shift the layout of struct folio around a bit, page[1].private
is now in use, which means we need to adjust __split_huge_page_tail()
a little.

We also need to annoy the vmcore_info people again.  Sorry.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Oscar Salvador
---
(A short illustrative sketch of the intended lockless usage follows the diff.)

 include/linux/mm_types.h   |  4 +++-
 include/linux/page-flags.h | 25 ++++---------------------
 kernel/vmcore_info.c       |  3 ++-
 mm/huge_memory.c           | 10 ++--------
 mm/hugetlb.c               | 24 ++++++++++++++++++++++++
 5 files changed, 35 insertions(+), 31 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index a7223ba3ea1e..fd80bf8b5d8a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -289,6 +289,7 @@ typedef struct {
  * @virtual: Virtual address in the kernel direct map.
  * @_last_cpupid: IDs of last CPU and last process that accessed the folio.
  * @_entire_mapcount: Do not use directly, call folio_entire_mapcount().
+ * @large_id: May identify the type of a large folio.
  * @_nr_pages_mapped: Do not use directly, call folio_mapcount().
  * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
  * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
@@ -348,9 +349,9 @@ struct folio {
 		struct {
 			unsigned long _flags_1;
 			unsigned long _head_1;
-			unsigned long _folio_avail;
 	/* public: */
 			atomic_t _entire_mapcount;
+			void *large_id;
 			atomic_t _nr_pages_mapped;
 			atomic_t _pincount;
 #ifdef CONFIG_64BIT
@@ -407,6 +408,7 @@ FOLIO_MATCH(_last_cpupid, _last_cpupid);
 			offsetof(struct page, pg) + sizeof(struct page))
 FOLIO_MATCH(flags, _flags_1);
 FOLIO_MATCH(compound_head, _head_1);
+FOLIO_MATCH(mapping, large_id);
 #undef FOLIO_MATCH
 #define FOLIO_MATCH(pg, fl) \
 	static_assert(offsetof(struct folio, fl) == \
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 652d77805e99..75bead4a5f09 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -188,10 +188,9 @@ enum pageflags {
 	 * PF_ANY.
 	 */
+	PG_large_rmappable = PG_active,	/* anon or file-backed */

 	/* At least one page in this folio has the hwpoison flag set */
-	PG_has_hwpoisoned = PG_error,
-	PG_hugetlb = PG_active,
-	PG_large_rmappable = PG_workingset, /* anon or file-backed */
+	PG_has_hwpoisoned = PG_workingset,
 };

 #define PAGEFLAGS_MASK		((1UL << NR_PAGEFLAGS) - 1)
@@ -857,23 +856,7 @@ TESTPAGEFLAG_FALSE(LargeRmappable, large_rmappable)

 #ifdef CONFIG_HUGETLB_PAGE
 int PageHuge(const struct page *page);
-SETPAGEFLAG(HugeTLB, hugetlb, PF_SECOND)
-CLEARPAGEFLAG(HugeTLB, hugetlb, PF_SECOND)
-
-/**
- * folio_test_hugetlb - Determine if the folio belongs to hugetlbfs
- * @folio: The folio to test.
- *
- * Context: Any context.  Caller should have a reference on the folio to
- *	prevent it from being turned into a tail page.
- * Return: True for hugetlbfs folios, false for anon folios or folios
- *	belonging to other filesystems.
- */
-static inline bool folio_test_hugetlb(const struct folio *folio)
-{
-	return folio_test_large(folio) &&
-		test_bit(PG_hugetlb, const_folio_flags(folio, 1));
-}
+bool folio_test_hugetlb(const struct folio *folio);
 #else
 TESTPAGEFLAG_FALSE(Huge, hugetlb)
 #endif
@@ -1118,7 +1101,7 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
  */
 #define PAGE_FLAGS_SECOND						\
 	(0xffUL /* order */	| 1UL << PG_has_hwpoisoned |		\
-	 1UL << PG_hugetlb | 1UL << PG_large_rmappable)
+	 1UL << PG_large_rmappable)

 #define PAGE_FLAGS_PRIVATE			\
 	(1UL << PG_private | 1UL << PG_private_2)
diff --git a/kernel/vmcore_info.c b/kernel/vmcore_info.c
index f95516cd45bb..453cd44a9c9c 100644
--- a/kernel/vmcore_info.c
+++ b/kernel/vmcore_info.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -206,7 +207,7 @@ static int __init crash_save_vmcoreinfo_init(void)
 #define PAGE_BUDDY_MAPCOUNT_VALUE	(~PG_buddy)
 	VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE);
 #ifdef CONFIG_HUGETLB_PAGE
-	VMCOREINFO_NUMBER(PG_hugetlb);
+	VMCOREINFO_NUMBER(&hugetlb_lock);
 #define PAGE_OFFLINE_MAPCOUNT_VALUE	(~PG_offline)
 	VMCOREINFO_NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE);
 #endif
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a81a09236c16..5731f28cba5f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2840,16 +2840,10 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 	page_tail->mapping = head->mapping;
 	page_tail->index = head->index + tail;

-	/*
-	 * page->private should not be set in tail pages.  Fix up and warn once
-	 * if private is unexpectedly set.
-	 */
-	if (unlikely(page_tail->private)) {
-		VM_WARN_ON_ONCE_PAGE(true, page_tail);
-		page_tail->private = 0;
-	}
 	if (folio_test_swapcache(folio))
 		new_folio->swap.val = folio->swap.val + tail;
+	else
+		new_folio->private = NULL;

 	/* Page flags must be visible before we make the page non-compound. */
 	smp_wmb();
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bb17e5c22759..963c25963b5e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -100,6 +100,30 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
 static struct resv_map *vma_resv_map(struct vm_area_struct *vma);

+static void folio_set_hugetlb(struct folio *folio)
+{
+	folio->large_id = &hugetlb_lock;
+}
+
+static void folio_clear_hugetlb(struct folio *folio)
+{
+	folio->large_id = NULL;
+}
+
+/**
+ * folio_test_hugetlb - Determine if the folio belongs to hugetlbfs.
+ * @folio: The folio to test.
+ *
+ * Context: Any context.
+ * Return: True for hugetlbfs folios, false for anon folios or folios
+ *	belonging to other filesystems.
+ */
+bool folio_test_hugetlb(const struct folio *folio)
+{
+	return folio_test_large(folio) && folio->large_id == &hugetlb_lock;
+}
+EXPORT_SYMBOL_GPL(folio_test_hugetlb);
+
 static inline bool subpool_is_free(struct hugepage_subpool *spool)
 {
 	if (spool->count)
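
For illustration only (not part of the patch): a minimal sketch of the kind
of lockless check this enables.  The caller below is hypothetical; only
folio_test_hugetlb() and page_folio() are assumed from the kernel proper.

	/* Hypothetical caller: no reference is held on the folio. */
	static bool page_might_be_hugetlb(struct page *page)
	{
		struct folio *folio = page_folio(page);

		/*
		 * Safe even if the folio is split under us: the split
		 * overwrites page[1].mapping (folio->large_id), so it can
		 * no longer equal &hugetlb_lock and folio_test_hugetlb()
		 * simply returns false instead of hitting a VM_BUG_ON().
		 */
		return folio_test_hugetlb(folio);
	}

As before, a caller that needs the answer to remain stable still has to cope
with the folio changing identity after the check returns.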