From patchwork Sun Nov 6 14:03:53 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
 Vlastimil Babka, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
 Minchan Kim, Mel Gorman, Andrea Arcangeli, Dan Williams,
 Hugh Dickins, Muchun Song, David Hildenbrand,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [RFC v2 1/3] mm: move PG_slab flag to page_type
Date: Sun, 6 Nov 2022 23:03:53 +0900
Message-Id: <20221106140355.294845-2-42.hyeyoo@gmail.com>
In-Reply-To: <20221106140355.294845-1-42.hyeyoo@gmail.com>
References: <20221106140355.294845-1-42.hyeyoo@gmail.com>
For now, only SLAB uses the _mapcount field to store the number of active
objects in a slab; the other slab allocators do not use it. As 16 bits are
enough for that, use the remaining 16 bits of _mapcount as page_type even
when SLAB is used, and then move the PG_slab flag to page_type.

As suggested by Matthew, store the number of active objects in negative
form and use helpers when accessing or modifying it. Note that page_type
is always placed in the upper 16 bits of _mapcount to avoid mistaking a
normal _mapcount for a page_type. As overflow is no longer a concern,
more of the low bits can be used.

Add more folio helpers to PAGE_TYPE_OPS() so as not to break the existing
slab implementations.

Remove the PG_slab check from PAGE_FLAGS_CHECK_AT_FREE; the buddy
allocator still checks that _mapcount is properly set at free time.

Exclude PG_slab from hwpoison and show_page_flags() for now.

Note that with this patch, page_mapped() and folio_mapped() always return
false for slab pages.
Cc: Andrew Morton
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Vlastimil Babka
Cc: Naoya Horiguchi
Cc: Miaohe Lin
Cc: "Matthew Wilcox (Oracle)"
Cc: Minchan Kim
Cc: Mel Gorman
Cc: Andrea Arcangeli
Cc: Dan Williams
Cc: Hugh Dickins
Cc: Muchun Song
Cc: David Hildenbrand
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 fs/proc/page.c                 | 13 ++----
 include/linux/mm_types.h       | 11 +++--
 include/linux/page-flags.h     | 77 ++++++++++++++++++++++++----------
 include/trace/events/mmflags.h |  1 -
 kernel/crash_core.c            |  3 +-
 mm/memory-failure.c            |  8 ----
 mm/slab.c                      | 44 ++++++++++++-------
 mm/slab.h                      |  3 +-
 8 files changed, 98 insertions(+), 62 deletions(-)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index f2273b164535..101be8d5a74e 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -67,7 +67,7 @@ static ssize_t kpagecount_read(struct file *file, char __user *buf,
 	 */
 	ppage = pfn_to_online_page(pfn);
 
-	if (!ppage || PageSlab(ppage) || page_has_type(ppage))
+	if (!ppage || page_has_type(ppage))
 		pcount = 0;
 	else
 		pcount = page_mapcount(ppage);
@@ -124,11 +124,8 @@ u64 stable_page_flags(struct page *page)
 
 	/*
 	 * pseudo flags for the well known (anonymous) memory mapped pages
-	 *
-	 * Note that page->_mapcount is overloaded in SLOB/SLUB/SLQB, so the
-	 * simple test in page_mapped() is not enough.
 	 */
-	if (!PageSlab(page) && page_mapped(page))
+	if (page_mapped(page))
 		u |= 1 << KPF_MMAP;
 	if (PageAnon(page))
 		u |= 1 << KPF_ANON;
@@ -178,16 +175,14 @@ u64 stable_page_flags(struct page *page)
 		u |= 1 << KPF_OFFLINE;
 	if (PageTable(page))
 		u |= 1 << KPF_PGTABLE;
+	if (PageSlab(page))
+		u |= 1 << KPF_SLAB;
 	if (page_is_idle(page))
 		u |= 1 << KPF_IDLE;
 
 	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
-	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
-	if (PageTail(page) && PageSlab(compound_head(page)))
-		u |= 1 << KPF_SLAB;
-
 	u |= kpf_copy_bit(k, KPF_ERROR,		PG_error);
 	u |= kpf_copy_bit(k, KPF_DIRTY,		PG_dirty);
 	u |= kpf_copy_bit(k, KPF_UPTODATE,	PG_uptodate);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 834022721bc6..2f298d1b8cf5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -196,10 +196,13 @@ struct page {
 					atomic_t _mapcount;
 
 					/*
-					 * If the page is neither PageSlab nor mappable to userspace,
-					 * the value stored here may help determine what this page
-					 * is used for. See page-flags.h for a list of page types
-					 * which are currently stored here.
+					 * If the page is not mappable to userspace, the value
+					 * stored here may help determine what this page is used for.
+					 * See page-flags.h for a list of page types which are currently
+					 * stored here.
+					 *
+					 * Note that only upper half is used for page types and lower
+					 * half is reserved for SLAB.
 					 */
 					unsigned int page_type;
 				};
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 0b0ae5084e60..31dda492cda5 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -107,7 +107,6 @@ enum pageflags {
 	PG_workingset,
 	PG_waiters,		/* Page has waiters, check its waitqueue. Must be bit #7 and in the same byte as "PG_locked" */
 	PG_error,
-	PG_slab,
 	PG_owner_priv_1,	/* Owner use. If pagecache, fs may use*/
 	PG_arch_1,
 	PG_reserved,
@@ -484,7 +483,6 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
 	TESTCLEARFLAG(Active, active, PF_HEAD)
 PAGEFLAG(Workingset, workingset, PF_HEAD)
 	TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
-__PAGEFLAG(Slab, slab, PF_NO_TAIL)
 __PAGEFLAG(SlobFree, slob_free, PF_NO_TAIL)
 PAGEFLAG(Checked, checked, PF_NO_COMPOUND)	   /* Used by some filesystems */
 
@@ -926,42 +924,72 @@ static inline bool is_page_hwpoison(struct page *page)
 }
 
 /*
- * For pages that are never mapped to userspace (and aren't PageSlab),
- * page_type may be used. Because it is initialised to -1, we invert the
- * sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and
- * __ClearPageFoo *sets* the bit used for PageFoo. We reserve a few high and
- * low bits so that an underflow or overflow of page_mapcount() won't be
- * mistaken for a page type value.
+ * For pages that are never mapped to userspace, page_type may be used.
+ * Because it is initialised to -1, we invert the sense of the bit,
+ * so __SetPageFoo *clears* the bit used for PageFoo, and __ClearPageFoo
+ * *sets* the bit used for PageFoo. We reserve a few high and low bits
+ * so that an underflow or overflow of page_mapcount() won't be mistaken
+ * for a page type value.
 */
 
 #define PAGE_TYPE_BASE	0xf0000000
-/* Reserve		0x0000007f to catch underflows of page_mapcount */
-#define PAGE_MAPCOUNT_RESERVE	-128
-#define PG_buddy	0x00000080
-#define PG_offline	0x00000100
-#define PG_table	0x00000200
-#define PG_guard	0x00000400
+#define PG_buddy	0x00010000
+#define PG_offline	0x00020000
+#define PG_table	0x00040000
+#define PG_guard	0x00080000
+#define PG_slab		0x00100000
 
 #define PageType(page, flag)						\
 	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
 
-static inline int page_has_type(struct page *page)
+#define PAGE_TYPE_MASK	0xffff0000
+
+static inline bool page_type_has_type(unsigned int page_type)
 {
-	return (int)page->page_type < PAGE_MAPCOUNT_RESERVE;
+	return ((int)page_type < (int)PAGE_TYPE_MASK);
 }
 
-#define PAGE_TYPE_OPS(uname, lname)					\
+static inline bool page_has_type(struct page *page)
+{
+	return page_type_has_type(page->page_type);
+}
+
+
+#define PAGE_TYPE_OPS(uname, lname, policy)				\
 static __always_inline int Page##uname(struct page *page)		\
 {									\
+	page = policy(page, 0);						\
+	return PageType(page, PG_##lname);				\
+}									\
+static __always_inline int folio_test_##lname(struct folio *folio)	\
+{									\
+	struct page *page = &folio->page;				\
+									\
 	return PageType(page, PG_##lname);				\
 }									\
 static __always_inline void __SetPage##uname(struct page *page)		\
 {									\
+	page = policy(page, 1);						\
+	VM_BUG_ON_PAGE(!PageType(page, 0), page);			\
+	page->page_type &= ~PG_##lname;					\
+}									\
+static __always_inline void __folio_set_##lname(struct folio *folio)	\
+{									\
+	struct page *page = &folio->page;				\
+									\
 	VM_BUG_ON_PAGE(!PageType(page, 0), page);			\
 	page->page_type &= ~PG_##lname;					\
 }									\
static __always_inline void __ClearPage##uname(struct page *page)	\
 {									\
+	page = policy(page, 1);						\
+	VM_BUG_ON_PAGE(!Page##uname(page), page);			\
+	page->page_type |= PG_##lname;					\
+}									\
+static __always_inline void __folio_clear_##lname(struct folio *folio)	\
+{									\
+	struct page *page = &folio->page;				\
+									\
 	VM_BUG_ON_PAGE(!Page##uname(page), page);			\
 	page->page_type |= PG_##lname;					\
 }
@@ -970,7 +998,7 @@ static __always_inline void __ClearPage##uname(struct page *page)	\
  * PageBuddy() indicates that the page is free and in the buddy system
  * (see mm/page_alloc.c).
  */
-PAGE_TYPE_OPS(Buddy, buddy)
+PAGE_TYPE_OPS(Buddy, buddy, PF_ANY)
 
 /*
  * PageOffline() indicates that the page is logically offline although the
@@ -994,7 +1022,10 @@ PAGE_TYPE_OPS(Buddy, buddy)
 * pages should check PageOffline() and synchronize with such drivers using
 * page_offline_freeze()/page_offline_thaw().
 */
-PAGE_TYPE_OPS(Offline, offline)
+PAGE_TYPE_OPS(Offline, offline, PF_ANY)
+
+/* PageSlab() indicates that the page is used by slab subsystem. */
+PAGE_TYPE_OPS(Slab, slab, PF_NO_TAIL)
 
 extern void page_offline_freeze(void);
 extern void page_offline_thaw(void);
@@ -1004,12 +1035,12 @@ extern void page_offline_end(void);
 /*
  * Marks pages in use as page tables.
  */
-PAGE_TYPE_OPS(Table, table)
+PAGE_TYPE_OPS(Table, table, PF_ANY)
 
 /*
  * Marks guardpages used with debug_pagealloc.
  */
-PAGE_TYPE_OPS(Guard, guard)
+PAGE_TYPE_OPS(Guard, guard, PF_ANY)
 
 extern bool is_free_buddy_page(struct page *page);
 
@@ -1057,8 +1088,8 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
 	(1UL << PG_lru		| 1UL << PG_locked	|	\
	 1UL << PG_private	| 1UL << PG_private_2	|	\
	 1UL << PG_writeback	| 1UL << PG_reserved	|	\
-	 1UL << PG_slab		| 1UL << PG_active	|	\
-	 1UL << PG_unevictable	| __PG_MLOCKED | LRU_GEN_MASK)
+	 1UL << PG_active	| 1UL << PG_unevictable	|	\
+	 __PG_MLOCKED | LRU_GEN_MASK)
 
 /*
  * Flags checked when a page is prepped for return by the page allocator.
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 11524cda4a95..72c11a16f771 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -112,7 +112,6 @@
 	{1UL << PG_lru,			"lru"		},		\
 	{1UL << PG_active,		"active"	},		\
 	{1UL << PG_workingset,		"workingset"	},		\
-	{1UL << PG_slab,		"slab"		},		\
 	{1UL << PG_owner_priv_1,	"owner_priv_1"	},		\
 	{1UL << PG_arch_1,		"arch_1"	},		\
 	{1UL << PG_reserved,		"reserved"	},		\
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index a0eb4d5cf557..f72437e4192f 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -479,13 +479,14 @@ static int __init crash_save_vmcoreinfo_init(void)
 	VMCOREINFO_NUMBER(PG_private);
 	VMCOREINFO_NUMBER(PG_swapcache);
 	VMCOREINFO_NUMBER(PG_swapbacked);
-	VMCOREINFO_NUMBER(PG_slab);
 #ifdef CONFIG_MEMORY_FAILURE
 	VMCOREINFO_NUMBER(PG_hwpoison);
 #endif
 	VMCOREINFO_NUMBER(PG_head_mask);
 #define PAGE_BUDDY_MAPCOUNT_VALUE	(~PG_buddy)
 	VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE);
+#define PAGE_SLAB_MAPCOUNT_VALUE	(~PG_slab)
+	VMCOREINFO_NUMBER(PAGE_SLAB_MAPCOUNT_VALUE);
 #ifdef CONFIG_HUGETLB_PAGE
 	VMCOREINFO_NUMBER(HUGETLB_PAGE_DTOR);
 #define PAGE_OFFLINE_MAPCOUNT_VALUE	(~PG_offline)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 779a426d2cab..9494f47c4cee 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1145,7 +1145,6 @@ static int me_huge_page(struct page_state *ps, struct page *p)
 #define mlock		(1UL << PG_mlocked)
 #define lru		(1UL << PG_lru)
 #define head		(1UL << PG_head)
-#define slab		(1UL << PG_slab)
 #define reserved	(1UL << PG_reserved)
 
 static struct page_state error_states[] = {
@@ -1155,13 +1154,6 @@ static struct page_state error_states[] = {
 	 * PG_buddy pages only make a small fraction of all free pages.
 	 */
 
-	/*
-	 * Could in theory check if slab page is free or if we can drop
-	 * currently unused objects without touching them. But just
-	 * treat it as standard kernel for now.
-	 */
-	{ slab,		slab,		MF_MSG_SLAB,	me_kernel },
-
 	{ head,		head,		MF_MSG_HUGE,		me_huge_page },
 
 	{ sc|dirty,	sc|dirty,	MF_MSG_DIRTY_SWAPCACHE,	me_swapcache_dirty },
diff --git a/mm/slab.c b/mm/slab.c
index 59c8e28f7b6a..da12e82aba41 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2265,6 +2265,21 @@ void __kmem_cache_release(struct kmem_cache *cachep)
 	}
 }
 
+static inline unsigned int slab_get_active(struct slab *slab)
+{
+	return ~(slab->page_type | PG_slab);
+}
+
+static inline void slab_inc_active(struct slab *slab)
+{
+	slab->page_type--;
+}
+
+static inline void slab_dec_active(struct slab *slab)
+{
+	slab->page_type++;
+}
+
 /*
  * Get the memory for a slab management obj.
  *
@@ -2287,7 +2302,6 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
 	void *addr = slab_address(slab);
 
 	slab->s_mem = addr + colour_off;
-	slab->active = 0;
 
 	if (OBJFREELIST_SLAB(cachep))
 		freelist = NULL;
@@ -2506,8 +2520,8 @@ static void *slab_get_obj(struct kmem_cache *cachep, struct slab *slab)
 {
 	void *objp;
 
-	objp = index_to_obj(cachep, slab, get_free_obj(slab, slab->active));
-	slab->active++;
+	objp = index_to_obj(cachep, slab, get_free_obj(slab, slab_get_active(slab)));
+	slab_inc_active(slab);
 
 	return objp;
 }
@@ -2520,7 +2534,7 @@ static void slab_put_obj(struct kmem_cache *cachep,
 	unsigned int i;
 
 	/* Verify double free bug */
-	for (i = slab->active; i < cachep->num; i++) {
+	for (i = slab_get_active(slab); i < cachep->num; i++) {
 		if (get_free_obj(slab, i) == objnr) {
 			pr_err("slab: double free detected in cache '%s', objp %px\n",
 			       cachep->name, objp);
@@ -2528,11 +2542,11 @@ static void slab_put_obj(struct kmem_cache *cachep,
 	}
 }
 #endif
-	slab->active--;
+	slab_dec_active(slab);
 	if (!slab->freelist)
 		slab->freelist = objp + obj_offset(cachep);
 
-	set_free_obj(slab, slab->active, objnr);
+	set_free_obj(slab, slab_get_active(slab), objnr);
 }
 
 /*
@@ -2631,14 +2645,14 @@ static void cache_grow_end(struct kmem_cache *cachep, struct slab *slab)
 	spin_lock(&n->list_lock);
 	n->total_slabs++;
-	if (!slab->active) {
+	if (!slab_get_active(slab)) {
 		list_add_tail(&slab->slab_list, &n->slabs_free);
 		n->free_slabs++;
 	} else
 		fixup_slab_list(cachep, n, slab, &list);
 
 	STATS_INC_GROWN(cachep);
-	n->free_objects += cachep->num - slab->active;
+	n->free_objects += cachep->num - slab_get_active(slab);
 	spin_unlock(&n->list_lock);
 
 	fixup_objfreelist_debug(cachep, &list);
@@ -2740,7 +2754,7 @@ static inline void fixup_slab_list(struct kmem_cache *cachep,
 
 	/* move slabp to correct slabp list: */
 	list_del(&slab->slab_list);
-	if (slab->active == cachep->num) {
+	if (slab_get_active(slab) == cachep->num) {
 		list_add(&slab->slab_list, &n->slabs_full);
 		if (OBJFREELIST_SLAB(cachep)) {
 #if DEBUG
@@ -2779,7 +2793,7 @@ static noinline struct slab *get_valid_first_slab(struct kmem_cache_node *n,
 
 	/* Move pfmemalloc slab to the end of list to speed up next search */
 	list_del(&slab->slab_list);
-	if (!slab->active) {
+	if (!slab_get_active(slab)) {
 		list_add_tail(&slab->slab_list, &n->slabs_free);
 		n->free_slabs++;
 	} else
@@ -2861,9 +2875,9 @@ static __always_inline int alloc_block(struct kmem_cache *cachep,
 	 * There must be at least one object available for
 	 * allocation.
 	 */
-	BUG_ON(slab->active >= cachep->num);
+	BUG_ON(slab_get_active(slab) >= cachep->num);
 
-	while (slab->active < cachep->num && batchcount--) {
+	while (slab_get_active(slab) < cachep->num && batchcount--) {
 		STATS_INC_ALLOCED(cachep);
 		STATS_INC_ACTIVE(cachep);
 		STATS_SET_HIGH(cachep);
@@ -3158,7 +3172,7 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 	STATS_INC_ACTIVE(cachep);
 	STATS_SET_HIGH(cachep);
 
-	BUG_ON(slab->active == cachep->num);
+	BUG_ON(slab_get_active(slab) == cachep->num);
 
 	obj = slab_get_obj(cachep, slab);
 	n->free_objects--;
@@ -3292,7 +3306,7 @@ static void free_block(struct kmem_cache *cachep, void **objpp,
 		STATS_DEC_ACTIVE(cachep);
 
 		/* fixup slab chains */
-		if (slab->active == 0) {
+		if (slab_get_active(slab) == 0) {
 			list_add(&slab->slab_list, &n->slabs_free);
 			n->free_slabs++;
 		} else {
@@ -3347,7 +3361,7 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 		struct slab *slab;
 
 		list_for_each_entry(slab, &n->slabs_free, slab_list) {
-			BUG_ON(slab->active);
+			BUG_ON(slab_get_active(slab));
 			i++;
 		}
diff --git a/mm/slab.h b/mm/slab.h
index 0202a8c2f0d2..f9df0fc3a918 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -18,7 +18,8 @@ struct slab {
 	struct kmem_cache *slab_cache;
 	void *freelist;	/* array of free object indexes */
 	void *s_mem;	/* first object */
-	unsigned int active;
+	/* lower half of page_type is used as active objects counter */
+	unsigned int page_type;
 
 #elif defined(CONFIG_SLUB)

From patchwork Sun Nov 6 14:03:54 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
 Vlastimil Babka, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
 Minchan Kim, Mel Gorman, Andrea Arcangeli, Dan Williams,
 Hugh Dickins, Muchun Song, David Hildenbrand,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, Steven Rostedt,
 Masami Hiramatsu, Andrey Konovalov, Marco Elver, Vasily Averin,
 NeilBrown
Subject: [RFC v2 2/3] mm: introduce show_page_types() to provide human-readable page_type
Date: Sun, 6 Nov 2022 23:03:54 +0900
Message-Id: <20221106140355.294845-3-42.hyeyoo@gmail.com>
In-Reply-To: <20221106140355.294845-1-42.hyeyoo@gmail.com>
References: <20221106140355.294845-1-42.hyeyoo@gmail.com>

Some page flags are not actually set in the 'flags' field. To provide a
better understanding of tracepoint output, introduce show_page_types(),
which prints the page flags stored in page_type.
Cc: Steven Rostedt
Cc: Masami Hiramatsu
Cc: Andrew Morton
Cc: Andrey Konovalov
Cc: Marco Elver
Cc: Vasily Averin
Cc: NeilBrown
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/trace/events/mmflags.h  | 12 ++++++++++++
 include/trace/events/page_ref.h | 10 ++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 72c11a16f771..a8dfb98a4dd6 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -136,6 +136,18 @@ IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison")
 	__def_pageflag_names						\
 	) : "none"
 
+#define __def_pagetype_names						\
+	{PG_slab,	"slab"		},				\
+	{PG_offline,	"offline"	},				\
+	{PG_guard,	"guard"		},				\
+	{PG_table,	"table"		},				\
+	{PG_buddy,	"buddy"		}
+
+#define show_page_types(page_type)					\
+	page_type_has_type(page_type) ?					\
+	__print_flags((~page_type), "|", __def_pagetype_names)		\
+	: "none"
+
 #if defined(CONFIG_X86)
 #define __VM_ARCH_SPECIFIC_1 {VM_PAT,     "pat"           }
 #elif defined(CONFIG_PPC)
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 8a99c1cd417b..b00d23e90e93 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -21,6 +21,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		__field(unsigned long, flags)
 		__field(int, count)
 		__field(int, mapcount)
+		__field(unsigned int, page_type)
 		__field(void *, mapping)
 		__field(int, mt)
 		__field(int, val)
@@ -31,14 +32,16 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		__entry->flags = page->flags;
 		__entry->count = page_ref_count(page);
 		__entry->mapcount = page_mapcount(page);
+		__entry->page_type = page->page_type;
 		__entry->mapping = page->mapping;
 		__entry->mt = get_pageblock_migratetype(page);
 		__entry->val = v;
 	),
 
-	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d val=%d",
+	TP_printk("pfn=0x%lx flags=%s page_type=%s count=%d mapcount=%d mapping=%p mt=%d val=%d",
 		__entry->pfn,
 		show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		show_page_types(__entry->page_type & PAGE_TYPE_MASK),
 		__entry->count,
 		__entry->mapcount, __entry->mapping, __entry->mt,
 		__entry->val)
@@ -69,6 +72,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 		__field(unsigned long, flags)
 		__field(int, count)
 		__field(int, mapcount)
+		__field(unsigned int, page_type)
 		__field(void *, mapping)
 		__field(int, mt)
 		__field(int, val)
@@ -80,15 +84,17 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 		__entry->flags = page->flags;
 		__entry->count = page_ref_count(page);
 		__entry->mapcount = page_mapcount(page);
+		__entry->page_type = page->page_type;
 		__entry->mapping = page->mapping;
 		__entry->mt = get_pageblock_migratetype(page);
 		__entry->val = v;
 		__entry->ret = ret;
 	),
 
-	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d val=%d ret=%d",
+	TP_printk("pfn=0x%lx flags=%s page_type=%s count=%d mapcount=%d mapping=%p mt=%d val=%d ret=%d",
 		__entry->pfn,
 		show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		show_page_types(__entry->page_type & PAGE_TYPE_MASK),
 		__entry->count,
 		__entry->mapcount, __entry->mapping, __entry->mt,
 		__entry->val, __entry->ret)

From patchwork Sun Nov 6 14:03:55 2022
2022 09:05:57 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 9F36A8E0001 for ; Sun, 6 Nov 2022 09:05:57 -0500 (EST) Received: from smtpin18.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 71AE940C28 for ; Sun, 6 Nov 2022 14:05:57 +0000 (UTC) X-FDA: 80103191154.18.046D170 Received: from mail-pl1-f179.google.com (mail-pl1-f179.google.com [209.85.214.179]) by imf17.hostedemail.com (Postfix) with ESMTP id 14FA940002 for ; Sun, 6 Nov 2022 14:05:56 +0000 (UTC) Received: by mail-pl1-f179.google.com with SMTP id p12so3410496plq.4 for ; Sun, 06 Nov 2022 06:05:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=smCLlLyTQyDwgc2ypyGBDtflgmLDspTngdUNottfTWg=; b=mc6yIRmuumqY0HGaWV/CJrfs3m3/Eiu6jlpSdE3xAB1eptE8MGjjeNCnMwrmPecCwn siWV69WENJ7VgEVkCtFq7MZBq0YmLCJ+Xx4IYokNr+UznTb4tIOxrIjKn9C5gM76GVtU 3d1TP+IFqemIK4rJCAub6sTMoQC8ugbnc9MzLluN1FdaOd0/HfsAsjyAwAvvGRcJ5321 6wE64QZq8LxIZVwFXtNT3HEADjK92f/vVNXheffQaAk1lBaO0z/vy/W4lAYK9C5gAiDA 9JXJ7mvRQN66VKYcttbu1jTZGvWGl/fQW9sVXzJBMG8RykgM5cNdNcRogyhkXxFYa5Jr hfKA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=smCLlLyTQyDwgc2ypyGBDtflgmLDspTngdUNottfTWg=; b=rwQd8lFWvA6mVwAKBxME7iNs1HuTAEhD5MjrImTfFFQYq/0mBiDd+ZvcBCeQ+ggqHp h05P7k7kqWXhjn0Mdyu1DBrl8ceGoK71p3I6R2pB+Dlx5hlbuZMeXVjRiylw+au9t04N PwM+7gebxxu2gesXd5dDrdjJ/AMEvX0gUpXFFLsWeyOCF3ZaTAdsEnHWAaObo9mPgwpZ a2FFunf99LtD8d7YuNewOOAlvTFgj3FABnGqIP1IUwf1zfqb7fs/IYzfEuOwuRdSMQM9 
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
 Vlastimil Babka, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
 Minchan Kim, Mel Gorman, Andrea Arcangeli, Dan Williams, Hugh Dickins,
 Muchun Song, David Hildenbrand, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Petr Mladek, Steven Rostedt, Sergey Senozhatsky, Andy Shevchenko,
 Rasmus Villemoes, Jonathan Corbet, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [RFC v2 3/3] mm, printk: introduce new format %pGt for page_type
Date: Sun, 6 Nov 2022 23:03:55 +0900
Message-Id: <20221106140355.294845-4-42.hyeyoo@gmail.com>
In-Reply-To: <20221106140355.294845-1-42.hyeyoo@gmail.com>
References: <20221106140355.294845-1-42.hyeyoo@gmail.com>
dump_page() uses the %pGp format to print the 'flags' field of struct page.
As some page flags (e.g. PG_buddy; see page-flags.h for details) are stored
in the page_type field, introduce a %pGt format that provides human-readable
output of page_type, and use it in dump_page().

Note that the sense of the bits in page_type is inverted: if page_type is
0xffffffff, no type is set, and when the PG_slab (0x00100000) flag is set,
page_type becomes 0xffefffff; clearing a bit is what sets it. Accordingly,
the bits in page_type are inverted before printing type names.

Below are examples of dump_page().
One is just after alloc_pages() and the other is after __SetPageSlab().

[    1.814728] page:ffffea000415e200 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105788
[    1.815961] flags: 0x17ffffc0000000(node=0|zone=2|lastcpupid=0x1fffff)
[    1.816443] page_type: 0xffffffff()
[    1.816704] raw: 0017ffffc0000000 0000000000000000 dead000000000122 0000000000000000
[    1.817291] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[    1.817870] page dumped because: Before __SetPageSlab()

[    1.818258] page:ffffea000415e200 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105788
[    1.818857] flags: 0x17ffffc0000000(node=0|zone=2|lastcpupid=0x1fffff)
[    1.819250] page_type: 0xffefffff(slab)
[    1.819483] raw: 0017ffffc0000000 0000000000000000 dead000000000122 0000000000000000
[    1.819947] raw: 0000000000000000 0000000000000000 00000001ffefffff 0000000000000000
[    1.820410] page dumped because: After __SetPageSlab()

Cc: Petr Mladek
Cc: Steven Rostedt
Cc: Sergey Senozhatsky
Cc: Andy Shevchenko
Cc: Rasmus Villemoes
Cc: Jonathan Corbet
Cc: Andrew Morton
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 Documentation/core-api/printk-formats.rst |  3 ++-
 lib/test_printf.c                         | 23 ++++++++++++++++++++++
 lib/vsprintf.c                            | 24 +++++++++++++++++++++++
 mm/debug.c                                |  7 +++++++
 mm/internal.h                             |  1 +
 5 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/Documentation/core-api/printk-formats.rst b/Documentation/core-api/printk-formats.rst
index dbe1aacc79d0..582e965508eb 100644
--- a/Documentation/core-api/printk-formats.rst
+++ b/Documentation/core-api/printk-formats.rst
@@ -575,12 +575,13 @@ The field width is passed by value, the bitmap is passed by reference. Helper
 macros cpumask_pr_args() and nodemask_pr_args() are available to ease
 printing cpumask and nodemask.
-Flags bitfields such as page flags, gfp_flags
+Flags bitfields such as page flags, page_type, gfp_flags
 ---------------------------------------------

 ::

 	%pGp	0x17ffffc0002036(referenced|uptodate|lru|active|private|node=0|zone=2|lastcpupid=0x1fffff)
+	%pGt	0xffefffff(slab)
 	%pGg	GFP_USER|GFP_DMA32|GFP_NOWARN
 	%pGv	read|exec|mayread|maywrite|mayexec|denywrite

diff --git a/lib/test_printf.c b/lib/test_printf.c
index fe13de1bed5f..6b778a8ea44c 100644
--- a/lib/test_printf.c
+++ b/lib/test_printf.c
@@ -654,12 +654,26 @@ page_flags_test(int section, int node, int zone, int last_cpupid,
 	test(cmp_buf, "%pGp", &flags);
 }

+static void __init page_type_test(unsigned int page_type, const char *name,
+				  char *cmp_buf)
+{
+	unsigned long size;
+
+	size = scnprintf(cmp_buf, BUF_SIZE, "%#x(", page_type);
+	if (page_type_has_type(page_type))
+		size += scnprintf(cmp_buf + size, BUF_SIZE - size, "%s", name);
+
+	snprintf(cmp_buf + size, BUF_SIZE - size, ")");
+	test(cmp_buf, "%pGt", &page_type);
+}
+
 static void __init
 flags(void)
 {
 	unsigned long flags;
 	char *cmp_buffer;
 	gfp_t gfp;
+	unsigned int page_type;

 	cmp_buffer = kmalloc(BUF_SIZE, GFP_KERNEL);
 	if (!cmp_buffer)
@@ -699,6 +713,15 @@ flags(void)
 	gfp |= __GFP_HIGH;
 	test(cmp_buffer, "%pGg", &gfp);

+	page_type = ~0;
+	page_type_test(page_type, "", cmp_buffer);
+
+	page_type = ~PG_slab;
+	page_type_test(page_type, "slab", cmp_buffer);
+
+	page_type = ~(PG_slab | PG_table | PG_buddy);
+	page_type_test(page_type, "slab|table|buddy", cmp_buffer);
+
 	kfree(cmp_buffer);
 }

diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 24f37bab8bc1..d855b40e5cfd 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -2056,6 +2056,28 @@ char *format_page_flags(char *buf, char *end, unsigned long flags)
 	return buf;
 }

+static
+char *format_page_type(char *buf, char *end, unsigned int page_type)
+{
+	if (!(page_type & PAGE_TYPE_BASE))
+		return string(buf, end, "no type for user-mapped page",
+			      default_str_spec);
+
+	buf = number(buf, end, page_type, default_flag_spec);
+
+	if (buf < end)
+		*buf = '(';
+	buf++;
+
+	if (page_type_has_type(page_type))
+		buf = format_flags(buf, end, ~page_type, pagetype_names);
+
+	if (buf < end)
+		*buf = ')';
+	buf++;
+
+	return buf;
+}
+
 static noinline_for_stack
 char *flags_string(char *buf, char *end, void *flags_ptr,
 		   struct printf_spec spec, const char *fmt)
@@ -2069,6 +2091,8 @@ char *flags_string(char *buf, char *end, void *flags_ptr,
 	switch (fmt[1]) {
 	case 'p':
 		return format_page_flags(buf, end, *(unsigned long *)flags_ptr);
+	case 't':
+		return format_page_type(buf, end, *(unsigned int *)flags_ptr);
 	case 'v':
 		flags = *(unsigned long *)flags_ptr;
 		names = vmaflag_names;

diff --git a/mm/debug.c b/mm/debug.c
index 0fd15ba70d16..bb7f2278abc5 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -36,6 +36,11 @@ const struct trace_print_flags pageflag_names[] = {
 	{0, NULL}
 };

+const struct trace_print_flags pagetype_names[] = {
+	__def_pagetype_names,
+	{0, NULL}
+};
+
 const struct trace_print_flags gfpflag_names[] = {
 	__def_gfpflag_names,
 	{0, NULL}
@@ -114,6 +119,8 @@ static void __dump_page(struct page *page)
 	pr_warn("%sflags: %pGp%s\n", type, &head->flags,
 		page_cma ? " CMA" : "");

+	pr_warn("page_type: %pGt\n", &head->page_type);
+
 	print_hex_dump(KERN_WARNING, "raw: ", DUMP_PREFIX_NONE, 32,
 			sizeof(unsigned long), page,
 			sizeof(struct page), false);

diff --git a/mm/internal.h b/mm/internal.h
index cb4c663a714e..956eaa9f12c0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -773,6 +773,7 @@ static inline void flush_tlb_batched_pending(struct mm_struct *mm)
 #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */

 extern const struct trace_print_flags pageflag_names[];
+extern const struct trace_print_flags pagetype_names[];
 extern const struct trace_print_flags vmaflag_names[];
 extern const struct trace_print_flags gfpflag_names[];