From patchwork Wed May 23 16:38:22 2018
X-Patchwork-Submitter: Huaisheng Ye
X-Patchwork-Id: 10421909
From: Huaisheng Ye
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: mhocko@suse.com, willy@infradead.org, hch@lst.de, vbabka@suse.cz,
    mgorman@techsingularity.net, kstewart@linuxfoundation.org,
    gregkh@linuxfoundation.org, colyli@suse.de, chengnt@lenovo.com,
    hehy1@lenovo.com, linux-kernel@vger.kernel.org,
    iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
    linux-btrfs@vger.kernel.org, Huaisheng Ye,
    "Levin, Alexander (Sasha Levin)", Christoph Hellwig
Subject: [RFC PATCH v3 1/9] include/linux/gfp.h: get rid of GFP_ZONE_TABLE/BAD
Date: Thu, 24 May 2018 00:38:22 +0800
Message-Id: <1527093502-3950-1-git-send-email-yehs2007@gmail.com>

From: Huaisheng Ye

Replace GFP_ZONE_TABLE and GFP_ZONE_BAD with an encoded zone number.

Delete ___GFP_DMA, ___GFP_HIGHMEM and ___GFP_DMA32 from the GFP
bitmasks; the bottom three bits of the GFP mask are reserved for
storing the encoded zone number.

The encoding method is XOR. Take the zone number from enum zone_type,
then XOR it with ZONE_NORMAL. The goal is to make sure ZONE_NORMAL
encodes to zero, which preserves compatibility: GFP_KERNEL and
GFP_ATOMIC can be used exactly as before.

Reserve __GFP_MOVABLE in bit 3 so that it can continue to be used as a
flag. As before, __GFP_MOVABLE represents the movable migrate type for
ZONE_DMA, ZONE_DMA32, and ZONE_NORMAL. But when it is combined with
__GFP_HIGHMEM, ZONE_MOVABLE shall be returned instead of ZONE_HIGHMEM.
__GFP_ZONE_MOVABLE is created to realize that.
With this patch, enabling __GFP_MOVABLE and __GFP_HIGHMEM together is
no longer enough to get ZONE_MOVABLE from gfp_zone; subsystems should
use GFP_HIGHUSER_MOVABLE directly to achieve that.

Decode the zone number directly from the bottom three bits of the flags
in gfp_zone. The encoding and decoding rely on the identity
A ^ B ^ B = A.

Suggested-by: Matthew Wilcox
Signed-off-by: Huaisheng Ye
Cc: Andrew Morton
Cc: Vlastimil Babka
Cc: Michal Hocko
Cc: Mel Gorman
Cc: Kate Stewart
Cc: "Levin, Alexander (Sasha Levin)"
Cc: Greg Kroah-Hartman
Cc: Christoph Hellwig
---
 include/linux/gfp.h | 107 ++++++++++------------------------------------------
 1 file changed, 20 insertions(+), 87 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 1a4582b..f76ccd76 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -16,9 +16,7 @@
  */

 /* Plain integer GFP bitmasks. Do not use this directly. */
-#define ___GFP_DMA		0x01u
-#define ___GFP_HIGHMEM		0x02u
-#define ___GFP_DMA32		0x04u
+#define ___GFP_ZONE_MASK	0x07u
 #define ___GFP_MOVABLE		0x08u
 #define ___GFP_RECLAIMABLE	0x10u
 #define ___GFP_HIGH		0x20u
@@ -53,11 +51,15 @@
  * without the underscores and use them consistently. The definitions here may
  * be used in bit comparisons.
  */
-#define __GFP_DMA	((__force gfp_t)___GFP_DMA)
-#define __GFP_HIGHMEM	((__force gfp_t)___GFP_HIGHMEM)
-#define __GFP_DMA32	((__force gfp_t)___GFP_DMA32)
+#define __GFP_DMA	((__force gfp_t)OPT_ZONE_DMA ^ ZONE_NORMAL)
+#define __GFP_HIGHMEM	((__force gfp_t)OPT_ZONE_HIGHMEM ^ ZONE_NORMAL)
+#define __GFP_DMA32	((__force gfp_t)OPT_ZONE_DMA32 ^ ZONE_NORMAL)
 #define __GFP_MOVABLE	((__force gfp_t)___GFP_MOVABLE)  /* ZONE_MOVABLE allowed */
-#define GFP_ZONEMASK	(__GFP_DMA|__GFP_HIGHMEM|__GFP_DMA32|__GFP_MOVABLE)
+#define GFP_ZONEMASK	((__force gfp_t)___GFP_ZONE_MASK | ___GFP_MOVABLE)
+/* The bottom 3 bits of the GFP bitmask encode the zone number */
+#define __GFP_ZONE_MASK ((__force gfp_t)___GFP_ZONE_MASK)
+#define __GFP_ZONE_MOVABLE \
+	((__force gfp_t)(ZONE_MOVABLE ^ ZONE_NORMAL) | ___GFP_MOVABLE)

 /*
  * Page mobility and placement hints
@@ -268,6 +270,13 @@
  * available and will not wake kswapd/kcompactd on failure. The _LIGHT
  * version does not attempt reclaim/compaction at all and is by default used
  * in page fault path, while the non-light is used by khugepaged.
+ *
+ * GFP_NORMAL() clears the bottom 3 bits of the GFP bitmask. Effectively it
+ * returns the encoded ZONE_NORMAL bits.
+ *
+ * GFP_NORMAL_UNMOVABLE() is similar to GFP_NORMAL(), but it clears the bottom
+ * 4 bits of the GFP bitmask. Besides the encoded ZONE_NORMAL bits, it clears
+ * the MOVABLE flag as well.
  */
 #define GFP_ATOMIC	(__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM)
 #define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)
@@ -279,10 +288,12 @@
 #define GFP_DMA		__GFP_DMA
 #define GFP_DMA32	__GFP_DMA32
 #define GFP_HIGHUSER	(GFP_USER | __GFP_HIGHMEM)
-#define GFP_HIGHUSER_MOVABLE	(GFP_HIGHUSER | __GFP_MOVABLE)
+#define GFP_HIGHUSER_MOVABLE	(GFP_USER | __GFP_ZONE_MOVABLE)
 #define GFP_TRANSHUGE_LIGHT	((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
			 __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM)
 #define GFP_TRANSHUGE	(GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)
+#define GFP_NORMAL(gfp)		((gfp) & ~__GFP_ZONE_MASK)
+#define GFP_NORMAL_UNMOVABLE(gfp)	((gfp) & ~GFP_ZONEMASK)

 /* Convert GFP flags to their corresponding migrate type */
 #define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)
@@ -326,87 +337,9 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
 #define OPT_ZONE_DMA32 ZONE_NORMAL
 #endif

-/*
- * GFP_ZONE_TABLE is a word size bitstring that is used for looking up the
- * zone to use given the lowest 4 bits of gfp_t. Entries are GFP_ZONES_SHIFT
- * bits long and there are 16 of them to cover all possible combinations of
- * __GFP_DMA, __GFP_DMA32, __GFP_MOVABLE and __GFP_HIGHMEM.
- *
- * The zone fallback order is MOVABLE=>HIGHMEM=>NORMAL=>DMA32=>DMA.
- * But GFP_MOVABLE is not only a zone specifier but also an allocation
- * policy. Therefore __GFP_MOVABLE plus another zone selector is valid.
- * Only 1 bit of the lowest 3 bits (DMA,DMA32,HIGHMEM) can be set to "1".
- *
- *       bit       result
- *       =================
- *       0x0    => NORMAL
- *       0x1    => DMA or NORMAL
- *       0x2    => HIGHMEM or NORMAL
- *       0x3    => BAD (DMA+HIGHMEM)
- *       0x4    => DMA32 or DMA or NORMAL
- *       0x5    => BAD (DMA+DMA32)
- *       0x6    => BAD (HIGHMEM+DMA32)
- *       0x7    => BAD (HIGHMEM+DMA32+DMA)
- *       0x8    => NORMAL (MOVABLE+0)
- *       0x9    => DMA or NORMAL (MOVABLE+DMA)
- *       0xa    => MOVABLE (Movable is valid only if HIGHMEM is set too)
- *       0xb    => BAD (MOVABLE+HIGHMEM+DMA)
- *       0xc    => DMA32 (MOVABLE+DMA32)
- *       0xd    => BAD (MOVABLE+DMA32+DMA)
- *       0xe    => BAD (MOVABLE+DMA32+HIGHMEM)
- *       0xf    => BAD (MOVABLE+DMA32+HIGHMEM+DMA)
- *
- * GFP_ZONES_SHIFT must be <= 2 on 32 bit platforms.
- */
-
-#if defined(CONFIG_ZONE_DEVICE) && (MAX_NR_ZONES-1) <= 4
-/* ZONE_DEVICE is not a valid GFP zone specifier */
-#define GFP_ZONES_SHIFT 2
-#else
-#define GFP_ZONES_SHIFT ZONES_SHIFT
-#endif
-
-#if 16 * GFP_ZONES_SHIFT > BITS_PER_LONG
-#error GFP_ZONES_SHIFT too large to create GFP_ZONE_TABLE integer
-#endif
-
-#define GFP_ZONE_TABLE ( \
-	(ZONE_NORMAL << 0 * GFP_ZONES_SHIFT)				       \
-	| (OPT_ZONE_DMA << ___GFP_DMA * GFP_ZONES_SHIFT)		       \
-	| (OPT_ZONE_HIGHMEM << ___GFP_HIGHMEM * GFP_ZONES_SHIFT)	       \
-	| (OPT_ZONE_DMA32 << ___GFP_DMA32 * GFP_ZONES_SHIFT)		       \
-	| (ZONE_NORMAL << ___GFP_MOVABLE * GFP_ZONES_SHIFT)		       \
-	| (OPT_ZONE_DMA << (___GFP_MOVABLE | ___GFP_DMA) * GFP_ZONES_SHIFT)    \
-	| (ZONE_MOVABLE << (___GFP_MOVABLE | ___GFP_HIGHMEM) * GFP_ZONES_SHIFT)\
-	| (OPT_ZONE_DMA32 << (___GFP_MOVABLE | ___GFP_DMA32) * GFP_ZONES_SHIFT)\
-)
-
-/*
- * GFP_ZONE_BAD is a bitmap for all combinations of __GFP_DMA, __GFP_DMA32
- * __GFP_HIGHMEM and __GFP_MOVABLE that are not permitted. One flag per
- * entry starting with bit 0. Bit is set if the combination is not
- * allowed.
- */
-#define GFP_ZONE_BAD ( \
-	1 << (___GFP_DMA | ___GFP_HIGHMEM)				       \
-	| 1 << (___GFP_DMA | ___GFP_DMA32)				       \
-	| 1 << (___GFP_DMA32 | ___GFP_HIGHMEM)				       \
-	| 1 << (___GFP_DMA | ___GFP_DMA32 | ___GFP_HIGHMEM)		       \
-	| 1 << (___GFP_MOVABLE | ___GFP_HIGHMEM | ___GFP_DMA)		       \
-	| 1 << (___GFP_MOVABLE | ___GFP_DMA32 | ___GFP_DMA)		       \
-	| 1 << (___GFP_MOVABLE | ___GFP_DMA32 | ___GFP_HIGHMEM)		       \
-	| 1 << (___GFP_MOVABLE | ___GFP_DMA32 | ___GFP_DMA | ___GFP_HIGHMEM)   \
-)
-
 static inline enum zone_type gfp_zone(gfp_t flags)
 {
-	enum zone_type z;
-	int bit = (__force int) (flags & GFP_ZONEMASK);
-
-	z = (GFP_ZONE_TABLE >> (bit * GFP_ZONES_SHIFT)) &
-					 ((1 << GFP_ZONES_SHIFT) - 1);
-	VM_BUG_ON((GFP_ZONE_BAD >> bit) & 1);
-	return z;
+	return ((__force unsigned int)flags & __GFP_ZONE_MASK) ^ ZONE_NORMAL;
 }

 /*