From patchwork Fri May 18 19:45:04 2018
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 10411989
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov", Christoph Lameter,
	Lai Jiangshan, Pekka Enberg, Vlastimil Babka, Dave Hansen,
	Jérôme Glisse
Subject: [PATCH v6 02/17] mm: Split page_type out from _mapcount
Date: Fri, 18 May 2018 12:45:04 -0700
Message-Id: <20180518194519.3820-3-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180518194519.3820-1-willy@infradead.org>
References: <20180518194519.3820-1-willy@infradead.org>

We're already using a union of many fields here, so stop abusing the
_mapcount and make page_type its own field.
That implies renaming some of the machinery that creates PageBuddy,
PageBalloon and PageKmemcg; bring back the PG_buddy, PG_balloon and
PG_kmemcg names.

As suggested by Kirill, make page_type a bitmask.  Because it starts out
life as -1 (thanks to sharing the storage with _mapcount), setting a
page flag means clearing the appropriate bit.  This gives us space for
probably twenty or so extra bits (depending how paranoid we want to be
about _mapcount underflow).

Signed-off-by: Matthew Wilcox
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
---
 include/linux/mm_types.h   | 13 ++++++-----
 include/linux/page-flags.h | 45 ++++++++++++++++++++++----------------
 kernel/crash_core.c        |  1 +
 mm/page_alloc.c            | 13 +++++------
 scripts/tags.sh            |  6 ++---
 5 files changed, 43 insertions(+), 35 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 21612347d311..41828fb34860 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -96,6 +96,14 @@ struct page {
 	};
 
 	union {
+		/*
+		 * If the page is neither PageSlab nor mappable to userspace,
+		 * the value stored here may help determine what this page
+		 * is used for.  See page-flags.h for a list of page types
+		 * which are currently stored here.
+		 */
+		unsigned int page_type;
+
 		_slub_counter_t counters;
 		unsigned int active;		/* SLAB */
 		struct {			/* SLUB */
@@ -109,11 +117,6 @@ struct page {
 			/*
 			 * Count of ptes mapped in mms, to show when
 			 * page is mapped & limit reverse map searches.
-			 *
-			 * Extra information about page type may be
-			 * stored here for pages that are never mapped,
-			 * in which case the value MUST BE <= -2.
-			 * See page-flags.h for more details.
 			 */
 			atomic_t _mapcount;
 
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e34a27727b9a..8c25b28a35aa 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -642,49 +642,56 @@ PAGEFLAG_FALSE(DoubleMap)
 #endif
 
 /*
- * For pages that are never mapped to userspace, page->mapcount may be
- * used for storing extra information about page type. Any value used
- * for this purpose must be <= -2, but it's better start not too close
- * to -2 so that an underflow of the page_mapcount() won't be mistaken
- * for a special page.
+ * For pages that are never mapped to userspace (and aren't PageSlab),
+ * page_type may be used.  Because it is initialised to -1, we invert the
+ * sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and
+ * __ClearPageFoo *sets* the bit used for PageFoo.  We reserve a few high and
+ * low bits so that an underflow or overflow of page_mapcount() won't be
+ * mistaken for a page type value.
 */
-#define PAGE_MAPCOUNT_OPS(uname, lname)					\
+
+#define PAGE_TYPE_BASE	0xf0000000
+/* Reserve		0x0000007f to catch underflows of page_mapcount */
+#define PG_buddy	0x00000080
+#define PG_balloon	0x00000100
+#define PG_kmemcg	0x00000200
+
+#define PageType(page, flag)						\
+	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
+
+#define PAGE_TYPE_OPS(uname, lname)					\
 static __always_inline int Page##uname(struct page *page)		\
 {									\
-	return atomic_read(&page->_mapcount) ==				\
-				PAGE_##lname##_MAPCOUNT_VALUE;		\
+	return PageType(page, PG_##lname);				\
 }									\
 static __always_inline void __SetPage##uname(struct page *page)	\
 {									\
-	VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);	\
-	atomic_set(&page->_mapcount, PAGE_##lname##_MAPCOUNT_VALUE);	\
+	VM_BUG_ON_PAGE(!PageType(page, 0), page);			\
+	page->page_type &= ~PG_##lname;					\
 }									\
 static __always_inline void __ClearPage##uname(struct page *page)	\
 {									\
 	VM_BUG_ON_PAGE(!Page##uname(page), page);			\
-	atomic_set(&page->_mapcount, -1);				\
+	page->page_type |= PG_##lname;					\
 }
 
 /*
- * PageBuddy() indicate that the page is free and in the buddy system
+ * PageBuddy() indicates that the page is free and in the buddy system
  * (see mm/page_alloc.c).
  */
-#define PAGE_BUDDY_MAPCOUNT_VALUE		(-128)
-PAGE_MAPCOUNT_OPS(Buddy, BUDDY)
+PAGE_TYPE_OPS(Buddy, buddy)
 
 /*
- * PageBalloon() is set on pages that are on the balloon page list
+ * PageBalloon() is true for pages that are on the balloon page list
  * (see mm/balloon_compaction.c).
  */
-#define PAGE_BALLOON_MAPCOUNT_VALUE		(-256)
-PAGE_MAPCOUNT_OPS(Balloon, BALLOON)
+PAGE_TYPE_OPS(Balloon, balloon)
 
 /*
  * If kmemcg is enabled, the buddy allocator will set PageKmemcg() on
  * pages allocated with __GFP_ACCOUNT. It gets cleared on page free.
  */
-#define PAGE_KMEMCG_MAPCOUNT_VALUE		(-512)
-PAGE_MAPCOUNT_OPS(Kmemcg, KMEMCG)
+PAGE_TYPE_OPS(Kmemcg, kmemcg)
 
 extern bool is_free_buddy_page(struct page *page);
 
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index f7674d676889..b66aced5e8c2 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -460,6 +460,7 @@ static int __init crash_save_vmcoreinfo_init(void)
 	VMCOREINFO_NUMBER(PG_hwpoison);
 #endif
 	VMCOREINFO_NUMBER(PG_head_mask);
+#define PAGE_BUDDY_MAPCOUNT_VALUE	(~PG_buddy)
 	VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE);
 #ifdef CONFIG_HUGETLB_PAGE
 	VMCOREINFO_NUMBER(HUGETLB_PAGE_DTOR);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5ee6256e31d0..da3eb2236ba1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -686,16 +686,14 @@ static inline void rmv_page_order(struct page *page)
 
 /*
  * This function checks whether a page is free && is the buddy
- * we can do coalesce a page and its buddy if
+ * we can coalesce a page and its buddy if
  * (a) the buddy is not in a hole (check before calling!) &&
  * (b) the buddy is in the buddy system &&
  * (c) a page and its buddy have the same order &&
  * (d) a page and its buddy are in the same zone.
  *
- * For recording whether a page is in the buddy system, we set ->_mapcount
- * PAGE_BUDDY_MAPCOUNT_VALUE.
- * Setting, clearing, and testing _mapcount PAGE_BUDDY_MAPCOUNT_VALUE is
- * serialized by zone->lock.
+ * For recording whether a page is in the buddy system, we set PageBuddy.
+ * Setting, clearing, and testing PageBuddy is serialized by zone->lock.
  *
  * For recording page's order, we use page_private(page).
  */
@@ -740,9 +738,8 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
  * as necessary, plus some accounting needed to play nicely with other
  * parts of the VM system.
 * At each level, we keep a list of pages, which are heads of continuous
- * free pages of length of (1 << order) and marked with _mapcount
- * PAGE_BUDDY_MAPCOUNT_VALUE. Page's order is recorded in page_private(page)
- * field.
+ * free pages of length of (1 << order) and marked with PageBuddy.
+ * Page's order is recorded in page_private(page) field.
  * So when we are allocating or freeing one, we can derive the state of the
  * other. That is, if we allocate a small block, and both were
  * free, the remainder of the region must be split into blocks.
diff --git a/scripts/tags.sh b/scripts/tags.sh
index 78e546ff689c..8c3ae36d4ea8 100755
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -188,9 +188,9 @@ regex_c=(
	'/\
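
As a side note for reviewers, the inverted sense of the page_type bits is
easy to check outside the kernel: because page_type shares storage with an
_mapcount of -1, the field starts out as all ones, __SetPageFoo() clears
the flag's bit, and PageType() verifies that the PAGE_TYPE_BASE nibble is
still intact.  The sketch below reuses the constants and the PageType()
test from this patch; the demo_page struct and the main() driver are made
up purely for illustration and are not part of the series.

#include <stdio.h>

/* Constants taken from the patch. */
#define PAGE_TYPE_BASE	0xf0000000
#define PG_buddy	0x00000080

/* Illustrative stand-in for struct page; only the new field matters here. */
struct demo_page {
	unsigned int page_type;
};

/* Same test the patch adds as PageType(). */
#define PageType(page, flag)						\
	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)

int main(void)
{
	/* page_type shares storage with _mapcount == -1, i.e. all ones. */
	struct demo_page pg = { .page_type = 0xffffffffU };
	struct demo_page *page = &pg;

	printf("PageBuddy? %d\n", PageType(page, PG_buddy));	/* 0: no type set */

	page->page_type &= ~PG_buddy;		/* what __SetPageBuddy() does */
	printf("PageBuddy? %d\n", PageType(page, PG_buddy));	/* 1 */

	page->page_type |= PG_buddy;		/* what __ClearPageBuddy() does */
	printf("PageBuddy? %d\n", PageType(page, PG_buddy));	/* 0 again */

	return 0;
}

The reserved low 0x0000007f range plays the same role as the old "don't
start too close to -2" rule: a modest _mapcount underflow only disturbs
bits below PG_buddy, so PageType() does not misreport it as a page type.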