From patchwork Wed Jul 6 17:42:44 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12908431
From: Yury Norov
To: linux-kernel@vger.kernel.org, Andrew Morton, Andy Shevchenko,
    David Howells, Ingo Molnar, Geert Uytterhoeven, Jonathan Corbet,
    "Kirill A. Shutemov", Matthew Wilcox, NeilBrown, Rasmus Villemoes,
    Russell King, Vlastimil Babka, William Kucharski,
    linux-doc@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org
Cc: Yury Norov
Subject: [PATCH 01/10] arm: align find_bit declarations with generic kernel
Date: Wed, 6 Jul 2022 10:42:44 -0700
Message-Id: <20220706174253.4175492-2-yury.norov@gmail.com>
In-Reply-To: <20220706174253.4175492-1-yury.norov@gmail.com>
References: <20220706174253.4175492-1-yury.norov@gmail.com>

ARM has its own implementation of the find_bit() functions, and their
declarations differ from those in the generic headers. Fix this by aligning
them with the generic kernel.

Signed-off-by: Yury Norov
---
 arch/arm/include/asm/bitops.h | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/arm/include/asm/bitops.h b/arch/arm/include/asm/bitops.h
index 8e94fe7ab5eb..714440fa2fc6 100644
--- a/arch/arm/include/asm/bitops.h
+++ b/arch/arm/include/asm/bitops.h
@@ -160,18 +160,20 @@ extern int _test_and_change_bit(int nr, volatile unsigned long * p);
 /*
  * Little endian assembly bitops.  nr = 0 -> byte 0 bit 0.
  */
-extern int _find_first_zero_bit_le(const unsigned long *p, unsigned size);
-extern int _find_next_zero_bit_le(const unsigned long *p, int size, int offset);
-extern int _find_first_bit_le(const unsigned long *p, unsigned size);
-extern int _find_next_bit_le(const unsigned long *p, int size, int offset);
+unsigned long _find_first_zero_bit_le(const unsigned long *p, unsigned long size);
+unsigned long _find_next_zero_bit_le(const unsigned long *p,
+		unsigned long size, unsigned long offset);
+unsigned long _find_first_bit_le(const unsigned long *p, unsigned long size);
+unsigned long _find_next_bit_le(const unsigned long *p, unsigned long size, unsigned long offset);
 
 /*
  * Big endian assembly bitops.  nr = 0 -> byte 3 bit 0.
  */
-extern int _find_first_zero_bit_be(const unsigned long *p, unsigned size);
-extern int _find_next_zero_bit_be(const unsigned long *p, int size, int offset);
-extern int _find_first_bit_be(const unsigned long *p, unsigned size);
-extern int _find_next_bit_be(const unsigned long *p, int size, int offset);
+unsigned long _find_first_zero_bit_be(const unsigned long *p, unsigned long size);
+unsigned long _find_next_zero_bit_be(const unsigned long *p,
+		unsigned long size, unsigned long offset);
+unsigned long _find_first_bit_be(const unsigned long *p, unsigned long size);
+unsigned long _find_next_bit_be(const unsigned long *p, unsigned long size, unsigned long offset);
 
 #ifndef CONFIG_SMP
 /*
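For context, a minimal caller sketch. It is not part of the patch and the helper
name is made up; it only illustrates why the prototypes must match the generic
ones: callers keep the result in an unsigned long and treat the bitmap size as
the "not found" value, so a mismatched int return would differ in width and
signedness on large bitmaps.

/* Hypothetical caller, for illustration only -- not from this series. */
static int alloc_slot(unsigned long *map, unsigned int nbits)
{
	unsigned long bit = find_first_zero_bit(map, nbits);

	if (bit >= nbits)	/* find_*_bit() reports "not found" as nbits */
		return -ENOSPC;

	__set_bit(bit, map);
	return bit;
}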
From patchwork Wed Jul 6 17:42:45 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12908432

From: Yury Norov
Subject: [PATCH 02/10] lib/bitmap: change return types to bool where appropriate
Date: Wed, 6 Jul 2022 10:42:45 -0700
Message-Id: <20220706174253.4175492-3-yury.norov@gmail.com>

Some bitmap functions return boolean results in int variables. Fix it by
changing return types to bool.
Signed-off-by: Yury Norov
---
 include/linux/bitmap.h       | 8 ++++----
 lib/bitmap.c                 | 4 ++--
 tools/include/linux/bitmap.h | 8 ++++----
 tools/lib/bitmap.c           | 2 +-
 4 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 2e6cd5681040..85aace699b2b 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -148,13 +148,13 @@ void __bitmap_shift_left(unsigned long *dst, const unsigned long *src,
 		unsigned int shift, unsigned int nbits);
 void bitmap_cut(unsigned long *dst, const unsigned long *src,
 		unsigned int first, unsigned int cut, unsigned int nbits);
-int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
+bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
 		const unsigned long *bitmap2, unsigned int nbits);
 void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
 		const unsigned long *bitmap2, unsigned int nbits);
 void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
 		const unsigned long *bitmap2, unsigned int nbits);
-int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
+bool __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
 		const unsigned long *bitmap2, unsigned int nbits);
 void __bitmap_replace(unsigned long *dst,
 		const unsigned long *old, const unsigned long *new,
@@ -303,7 +303,7 @@ void bitmap_to_arr64(u64 *buf, const unsigned long *bitmap, unsigned int nbits);
 	bitmap_copy_clear_tail((unsigned long *)(buf), (const unsigned long *)(bitmap), (nbits))
 #endif
 
-static inline int bitmap_and(unsigned long *dst, const unsigned long *src1,
+static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1,
 			const unsigned long *src2, unsigned int nbits)
 {
 	if (small_const_nbits(nbits))
@@ -329,7 +329,7 @@ static inline void bitmap_xor(unsigned long *dst, const unsigned long *src1,
 		__bitmap_xor(dst, src1, src2, nbits);
 }
 
-static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1,
+static inline bool bitmap_andnot(unsigned long *dst, const unsigned long *src1,
 			const unsigned long *src2, unsigned int nbits)
 {
 	if (small_const_nbits(nbits))
diff --git a/lib/bitmap.c b/lib/bitmap.c
index b18e31ea6e66..098fd9db2363 100644
--- a/lib/bitmap.c
+++ b/lib/bitmap.c
@@ -237,7 +237,7 @@ void bitmap_cut(unsigned long *dst, const unsigned long *src,
 }
 EXPORT_SYMBOL(bitmap_cut);
 
-int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
+bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
 				const unsigned long *bitmap2, unsigned int bits)
 {
 	unsigned int k;
@@ -275,7 +275,7 @@ void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
 }
 EXPORT_SYMBOL(__bitmap_xor);
 
-int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
+bool __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
 				const unsigned long *bitmap2, unsigned int bits)
 {
 	unsigned int k;
diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h
index afdf93bebaaf..2ae7ab8ed7d1 100644
--- a/tools/include/linux/bitmap.h
+++ b/tools/include/linux/bitmap.h
@@ -14,7 +14,7 @@ int __bitmap_weight(const unsigned long *bitmap, int bits);
 void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
 		 const unsigned long *bitmap2, int bits);
-int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
+bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
 		 const unsigned long *bitmap2, unsigned int bits);
 bool __bitmap_equal(const unsigned long *bitmap1,
 		    const unsigned long *bitmap2, unsigned int bits);
@@ -45,7 +45,7 @@ static inline void bitmap_fill(unsigned long *dst, unsigned int nbits)
 	dst[nlongs - 1] = BITMAP_LAST_WORD_MASK(nbits);
 }
 
-static inline int bitmap_empty(const unsigned long *src, unsigned nbits)
+static inline bool bitmap_empty(const unsigned long *src, unsigned int nbits)
 {
 	if (small_const_nbits(nbits))
 		return ! (*src & BITMAP_LAST_WORD_MASK(nbits));
@@ -53,7 +53,7 @@ static inline int bitmap_empty(const unsigned long *src, unsigned nbits)
 	return find_first_bit(src, nbits) == nbits;
 }
 
-static inline int bitmap_full(const unsigned long *src, unsigned int nbits)
+static inline bool bitmap_full(const unsigned long *src, unsigned int nbits)
 {
 	if (small_const_nbits(nbits))
 		return ! (~(*src) & BITMAP_LAST_WORD_MASK(nbits));
@@ -146,7 +146,7 @@ size_t bitmap_scnprintf(unsigned long *bitmap, unsigned int nbits,
  * @src2: operand 2
  * @nbits: size of bitmap
  */
-static inline int bitmap_and(unsigned long *dst, const unsigned long *src1,
+static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1,
 			     const unsigned long *src2, unsigned int nbits)
 {
 	if (small_const_nbits(nbits))
diff --git a/tools/lib/bitmap.c b/tools/lib/bitmap.c
index 354f8cdc0880..2e351d63fdba 100644
--- a/tools/lib/bitmap.c
+++ b/tools/lib/bitmap.c
@@ -57,7 +57,7 @@ size_t bitmap_scnprintf(unsigned long *bitmap, unsigned int nbits,
 	return ret;
 }
 
-int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
+bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
 		 const unsigned long *bitmap2, unsigned int bits)
 {
 	unsigned int k;
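For context, a short sketch (not from the series; the wrapper is hypothetical) of
how the return value is consumed: bitmap_and() reports whether the destination
ended up non-empty, which reads naturally as a bool.

/* Illustrative only. */
static bool masks_intersect(const unsigned long *src1, const unsigned long *src2)
{
	DECLARE_BITMAP(dst, 128);

	/* true iff dst (= src1 & src2) has at least one bit set */
	return bitmap_and(dst, src1, src2, 128);
}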
From patchwork Wed Jul 6 17:42:46 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12908433

From: Yury Norov
Subject: [PATCH 03/10] lib/bitmap: change type of bitmap_weight to unsigned long
Date: Wed, 6 Jul 2022 10:42:46 -0700
Message-Id: <20220706174253.4175492-4-yury.norov@gmail.com>

bitmap_weight() doesn't return negative values, so change its return type
to unsigned long. This may help the compiler generate better code and
catch bugs.

Signed-off-by: Yury Norov
---
 include/linux/bitmap.h       | 5 +++--
 lib/bitmap.c                 | 5 ++---
 tools/include/linux/bitmap.h | 4 ++--
 tools/lib/bitmap.c           | 4 ++--
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 85aace699b2b..a92149f415d2 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -163,7 +163,7 @@ bool __bitmap_intersects(const unsigned long *bitmap1,
 		const unsigned long *bitmap2, unsigned int nbits);
 bool __bitmap_subset(const unsigned long *bitmap1,
 		const unsigned long *bitmap2, unsigned int nbits);
-int __bitmap_weight(const unsigned long *bitmap, unsigned int nbits);
+unsigned long __bitmap_weight(const unsigned long *bitmap, unsigned int nbits);
 void __bitmap_set(unsigned long *map, unsigned int start, int len);
 void __bitmap_clear(unsigned long *map, unsigned int start, int len);
@@ -419,7 +419,8 @@ static inline bool bitmap_full(const unsigned long *src, unsigned int nbits)
 	return find_first_zero_bit(src, nbits) == nbits;
 }
 
-static __always_inline int bitmap_weight(const unsigned long *src, unsigned int nbits)
+static __always_inline
+unsigned long bitmap_weight(const unsigned long *src, unsigned int nbits)
 {
 	if (small_const_nbits(nbits))
 		return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits));
diff --git a/lib/bitmap.c b/lib/bitmap.c
index 098fd9db2363..b580b381eca1 100644
--- a/lib/bitmap.c
+++ b/lib/bitmap.c
@@ -333,10 +333,9 @@ bool __bitmap_subset(const unsigned long *bitmap1,
 }
 EXPORT_SYMBOL(__bitmap_subset);
 
-int __bitmap_weight(const unsigned long *bitmap, unsigned int bits)
+unsigned long __bitmap_weight(const unsigned long *bitmap, unsigned int bits)
 {
-	unsigned int k, lim = bits/BITS_PER_LONG;
-	int w = 0;
+	unsigned long k, w = 0, lim = bits/BITS_PER_LONG;
 
 	for (k = 0; k < lim; k++)
 		w += hweight_long(bitmap[k]);
diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h
index 2ae7ab8ed7d1..ae1852e39142 100644
--- a/tools/include/linux/bitmap.h
+++ b/tools/include/linux/bitmap.h
@@ -11,7 +11,7 @@
 #define DECLARE_BITMAP(name,bits) \
 	unsigned long name[BITS_TO_LONGS(bits)]
 
-int __bitmap_weight(const unsigned long *bitmap, int bits);
+unsigned long __bitmap_weight(const unsigned long *bitmap, unsigned int bits);
 void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
 		 const unsigned long *bitmap2, int bits);
 bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
@@ -61,7 +61,7 @@ static inline bool bitmap_full(const unsigned long *src, unsigned int nbits)
 	return find_first_zero_bit(src, nbits) == nbits;
 }
 
-static inline int bitmap_weight(const unsigned long *src, unsigned int nbits)
+static inline unsigned long bitmap_weight(const unsigned long *src, unsigned int nbits)
 {
 	if (small_const_nbits(nbits))
 		return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits));
diff --git a/tools/lib/bitmap.c b/tools/lib/bitmap.c
index 2e351d63fdba..e1fafc131a49 100644
--- a/tools/lib/bitmap.c
+++ b/tools/lib/bitmap.c
@@ -5,9 +5,9 @@
  */
 #include
 
-int __bitmap_weight(const unsigned long *bitmap, int bits)
+unsigned long __bitmap_weight(const unsigned long *bitmap, unsigned int bits)
 {
-	int k, w = 0, lim = bits/BITS_PER_LONG;
+	unsigned long k, w = 0, lim = bits/BITS_PER_LONG;
 
 	for (k = 0; k < lim; k++)
 		w += hweight_long(bitmap[k]);
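A caller-side sketch (illustrative only; the helper is hypothetical) of the
practical effect: with an unsigned return type, comparisons against unsigned
sizes no longer mix signedness.

/* Illustrative only. */
static bool map_is_half_full(const unsigned long *map, unsigned int nbits)
{
	/* bitmap_weight() can never be negative. */
	return bitmap_weight(map, nbits) >= nbits / 2;
}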
From patchwork Wed Jul 6 17:42:47 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12908434
From: Yury Norov
Subject: [PATCH 04/10] cpumask: change return types to bool where appropriate
Date: Wed, 6 Jul 2022 10:42:47 -0700
Message-Id: <20220706174253.4175492-5-yury.norov@gmail.com>

Some cpumask functions have integer return types where the returned values
are naturally boolean. Change those return types to bool.

Signed-off-by: Yury Norov
---
 include/linux/cpumask.h | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index fe29ac7cc469..b54e27d9da6b 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -372,9 +372,9 @@ static __always_inline void __cpumask_clear_cpu(int cpu, struct cpumask *dstp)
  * @cpu: cpu number (< nr_cpu_ids)
  * @cpumask: the cpumask pointer
  *
- * Returns 1 if @cpu is set in @cpumask, else returns 0
+ * Returns true if @cpu is set in @cpumask, else returns false
  */
-static __always_inline int cpumask_test_cpu(int cpu, const struct cpumask *cpumask)
+static __always_inline bool cpumask_test_cpu(int cpu, const struct cpumask *cpumask)
 {
 	return test_bit(cpumask_check(cpu), cpumask_bits((cpumask)));
 }
@@ -384,11 +384,11 @@ static __always_inline int cpumask_test_cpu(int cpu, const struct cpumask *cpuma
  * @cpu: cpu number (< nr_cpu_ids)
  * @cpumask: the cpumask pointer
  *
- * Returns 1 if @cpu is set in old bitmap of @cpumask, else returns 0
+ * Returns true if @cpu is set in old bitmap of @cpumask, else returns false
  *
  * test_and_set_bit wrapper for cpumasks.
  */
-static __always_inline int cpumask_test_and_set_cpu(int cpu, struct cpumask *cpumask)
+static __always_inline bool cpumask_test_and_set_cpu(int cpu, struct cpumask *cpumask)
 {
 	return test_and_set_bit(cpumask_check(cpu), cpumask_bits(cpumask));
 }
@@ -398,11 +398,11 @@ static __always_inline int cpumask_test_and_set_cpu(int cpu, struct cpumask *cpu
  * @cpu: cpu number (< nr_cpu_ids)
  * @cpumask: the cpumask pointer
  *
- * Returns 1 if @cpu is set in old bitmap of @cpumask, else returns 0
+ * Returns true if @cpu is set in old bitmap of @cpumask, else returns false
  *
  * test_and_clear_bit wrapper for cpumasks.
  */
-static __always_inline int cpumask_test_and_clear_cpu(int cpu, struct cpumask *cpumask)
+static __always_inline bool cpumask_test_and_clear_cpu(int cpu, struct cpumask *cpumask)
 {
 	return test_and_clear_bit(cpumask_check(cpu), cpumask_bits(cpumask));
 }
@@ -431,9 +431,9 @@ static inline void cpumask_clear(struct cpumask *dstp)
  * @src1p: the first input
  * @src2p: the second input
  *
- * If *@dstp is empty, returns 0, else returns 1
+ * If *@dstp is empty, returns false, else returns true
  */
-static inline int cpumask_and(struct cpumask *dstp,
+static inline bool cpumask_and(struct cpumask *dstp,
 			       const struct cpumask *src1p,
 			       const struct cpumask *src2p)
 {
@@ -474,9 +474,9 @@ static inline void cpumask_xor(struct cpumask *dstp,
  * @src1p: the first input
  * @src2p: the second input
  *
- * If *@dstp is empty, returns 0, else returns 1
+ * If *@dstp is empty, returns false, else returns true
  */
-static inline int cpumask_andnot(struct cpumask *dstp,
+static inline bool cpumask_andnot(struct cpumask *dstp,
 				  const struct cpumask *src1p,
 				  const struct cpumask *src2p)
 {
@@ -539,9 +539,9 @@ static inline bool cpumask_intersects(const struct cpumask *src1p,
  * @src1p: the first input
  * @src2p: the second input
  *
- * Returns 1 if *@src1p is a subset of *@src2p, else returns 0
+ * Returns true if *@src1p is a subset of *@src2p, else returns false
  */
-static inline int cpumask_subset(const struct cpumask *src1p,
+static inline bool cpumask_subset(const struct cpumask *src1p,
 				  const struct cpumask *src2p)
 {
 	return bitmap_subset(cpumask_bits(src1p), cpumask_bits(src2p),
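A usage sketch (not from the series; the helper and its caller are hypothetical)
of the test-and-set style API whose return value is now a bool:

/* Illustrative only. */
static void kick_cpu_once(int cpu, struct cpumask *pending)
{
	/* true means @cpu was already pending, so do nothing */
	if (cpumask_test_and_set_cpu(cpu, pending))
		return;

	smp_send_reschedule(cpu);
}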
From patchwork Wed Jul 6 17:42:48 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12908437

From: Yury Norov
Subject: [PATCH 05/10] lib/cpumask: change return types to unsigned where appropriate
Date: Wed, 6 Jul 2022 10:42:48 -0700
Message-Id: <20220706174253.4175492-6-yury.norov@gmail.com>

Switch return types to unsigned int where return values cannot be negative.
Signed-off-by: Yury Norov
---
 include/linux/cpumask.h | 14 +++++++-------
 lib/cpumask.c           | 18 +++++++++---------
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index b54e27d9da6b..760022bcb925 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -176,12 +176,12 @@ static inline unsigned int cpumask_local_spread(unsigned int i, int node)
 	return 0;
 }
 
-static inline int cpumask_any_and_distribute(const struct cpumask *src1p,
+static inline unsigned int cpumask_any_and_distribute(const struct cpumask *src1p,
 					     const struct cpumask *src2p)
 {
 	return cpumask_first_and(src1p, src2p);
 }
 
-static inline int cpumask_any_distribute(const struct cpumask *srcp)
+static inline unsigned int cpumask_any_distribute(const struct cpumask *srcp)
 {
 	return cpumask_first(srcp);
 }
@@ -258,12 +258,12 @@ static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
 	return find_next_zero_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
 }
 
-int __pure cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
-int __pure cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
+unsigned int __pure cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
+unsigned int __pure cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
 unsigned int cpumask_local_spread(unsigned int i, int node);
-int cpumask_any_and_distribute(const struct cpumask *src1p,
+unsigned int cpumask_any_and_distribute(const struct cpumask *src1p,
 			       const struct cpumask *src2p);
-int cpumask_any_distribute(const struct cpumask *srcp);
+unsigned int cpumask_any_distribute(const struct cpumask *srcp);
 
 /**
  * for_each_cpu - iterate over every cpu in a mask
@@ -289,7 +289,7 @@ int cpumask_any_distribute(const struct cpumask *srcp);
 		(cpu) = cpumask_next_zero((cpu), (mask)),	\
 		(cpu) < nr_cpu_ids;)
 
-extern int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap);
+unsigned int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap);
 
 /**
  * for_each_cpu_wrap - iterate over every cpu in a mask, starting at a specified location
diff --git a/lib/cpumask.c b/lib/cpumask.c
index a971a82d2f43..da68f6bbde44 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -31,7 +31,7 @@ EXPORT_SYMBOL(cpumask_next);
  *
  * Returns >= nr_cpu_ids if no further cpus set in both.
  */
-int cpumask_next_and(int n, const struct cpumask *src1p,
+unsigned int cpumask_next_and(int n, const struct cpumask *src1p,
 		     const struct cpumask *src2p)
 {
 	/* -1 is a legal arg here. */
@@ -50,7 +50,7 @@ EXPORT_SYMBOL(cpumask_next_and);
  * Often used to find any cpu but smp_processor_id() in a mask.
  * Returns >= nr_cpu_ids if no cpus set.
  */
-int cpumask_any_but(const struct cpumask *mask, unsigned int cpu)
+unsigned int cpumask_any_but(const struct cpumask *mask, unsigned int cpu)
 {
 	unsigned int i;
 
@@ -74,9 +74,9 @@ EXPORT_SYMBOL(cpumask_any_but);
  * Note: the @wrap argument is required for the start condition when
  * we cannot assume @start is set in @mask.
  */
-int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap)
+unsigned int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap)
 {
-	int next;
+	unsigned int next;
 
 again:
 	next = cpumask_next(n, mask);
@@ -205,7 +205,7 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
  */
 unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-	int cpu;
+	unsigned int cpu;
 
 	/* Wrap: we always want a cpu. */
 	i %= num_online_cpus();
@@ -243,10 +243,10 @@ static DEFINE_PER_CPU(int, distribute_cpu_mask_prev);
  *
  * Returns >= nr_cpu_ids if the intersection is empty.
  */
-int cpumask_any_and_distribute(const struct cpumask *src1p,
+unsigned int cpumask_any_and_distribute(const struct cpumask *src1p,
 			       const struct cpumask *src2p)
 {
-	int next, prev;
+	unsigned int next, prev;
 
 	/* NOTE: our first selection will skip 0. */
 	prev = __this_cpu_read(distribute_cpu_mask_prev);
@@ -262,9 +262,9 @@ int cpumask_any_and_distribute(const struct cpumask *src1p,
 }
 EXPORT_SYMBOL(cpumask_any_and_distribute);
 
-int cpumask_any_distribute(const struct cpumask *srcp)
+unsigned int cpumask_any_distribute(const struct cpumask *srcp)
 {
-	int next, prev;
+	unsigned int next, prev;
 
 	/* NOTE: our first selection will skip 0. */
 	prev = __this_cpu_read(distribute_cpu_mask_prev);
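For context, a sketch (illustrative only; the helper is hypothetical) of the
convention these functions follow: "no cpu found" is reported as a value
>= nr_cpu_ids, never as a negative number, so an unsigned return type fits
naturally.

/* Illustrative only. */
static bool any_other_cpu_set(const struct cpumask *mask)
{
	return cpumask_any_but(mask, smp_processor_id()) < nr_cpu_ids;
}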
From patchwork Wed Jul 6 17:42:49 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12908435

From: Yury Norov
Subject: [PATCH 06/10] lib/cpumask: move trivial wrappers around find_bit to the header
Date: Wed, 6 Jul 2022 10:42:49 -0700
Message-Id: <20220706174253.4175492-7-yury.norov@gmail.com>

To avoid circular dependencies, cpumask keeps simple (almost one-line)
wrappers around find_bit() in a C file. Commit 47d8c15615c0a2 ("include:
move find.h from asm_generic to linux") moved the find.h header out of the
asm_generic include path, which helped fix many circular dependencies,
including some in cpumask.h. Move those one-liners to the header file.

Signed-off-by: Yury Norov
---
 include/linux/cpumask.h | 57 ++++++++++++++++++++++++++++++++++++++---
 lib/cpumask.c           | 55 ---------------------------------------
 2 files changed, 54 insertions(+), 58 deletions(-)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 760022bcb925..ea3de2c2c180 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -241,7 +241,21 @@ static inline unsigned int cpumask_last(const struct cpumask *srcp)
 	return find_last_bit(cpumask_bits(srcp), nr_cpumask_bits);
 }
 
-unsigned int __pure cpumask_next(int n, const struct cpumask *srcp);
+/**
+ * cpumask_next - get the next cpu in a cpumask
+ * @n: the cpu prior to the place to search (ie. return will be > @n)
+ * @srcp: the cpumask pointer
+ *
+ * Returns >= nr_cpu_ids if no further cpus set.
+ */
+static inline
+unsigned int cpumask_next(int n, const struct cpumask *srcp)
+{
+	/* -1 is a legal arg here. */
+	if (n != -1)
+		cpumask_check(n);
+	return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n + 1);
+}
 
 /**
  * cpumask_next_zero - get the next unset cpu in a cpumask
@@ -258,8 +272,25 @@ static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
 	return find_next_zero_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
 }
 
-unsigned int __pure cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
-unsigned int __pure cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
+/**
+ * cpumask_next_and - get the next cpu in *src1p & *src2p
+ * @n: the cpu prior to the place to search (ie. return will be > @n)
+ * @src1p: the first cpumask pointer
+ * @src2p: the second cpumask pointer
+ *
+ * Returns >= nr_cpu_ids if no further cpus set in both.
+ */
+static inline
+unsigned int cpumask_next_and(int n, const struct cpumask *src1p,
+			      const struct cpumask *src2p)
+{
+	/* -1 is a legal arg here. */
+	if (n != -1)
+		cpumask_check(n);
+	return find_next_and_bit(cpumask_bits(src1p), cpumask_bits(src2p),
+		nr_cpumask_bits, n + 1);
+}
+
 unsigned int cpumask_local_spread(unsigned int i, int node);
 unsigned int cpumask_any_and_distribute(const struct cpumask *src1p,
 			       const struct cpumask *src2p);
@@ -324,6 +355,26 @@ unsigned int cpumask_next_wrap(int n, const struct cpumask *mask, int start, boo
 	for ((cpu) = -1;				\
 		(cpu) = cpumask_next_and((cpu), (mask1), (mask2)),	\
 		(cpu) < nr_cpu_ids;)
+
+/**
+ * cpumask_any_but - return a "random" in a cpumask, but not this one.
+ * @mask: the cpumask to search
+ * @cpu: the cpu to ignore.
+ *
+ * Often used to find any cpu but smp_processor_id() in a mask.
+ * Returns >= nr_cpu_ids if no cpus set.
+ */
+static inline
+unsigned int cpumask_any_but(const struct cpumask *mask, unsigned int cpu)
+{
+	unsigned int i;
+
+	cpumask_check(cpu);
+	for_each_cpu(i, mask)
+		if (i != cpu)
+			break;
+	return i;
+}
 #endif /* SMP */
 
 #define CPU_BITS_NONE						\
diff --git a/lib/cpumask.c b/lib/cpumask.c
index da68f6bbde44..cb7262ff8633 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -7,61 +7,6 @@
 #include
 #include
 
-/**
- * cpumask_next - get the next cpu in a cpumask
- * @n: the cpu prior to the place to search (ie. return will be > @n)
- * @srcp: the cpumask pointer
- *
- * Returns >= nr_cpu_ids if no further cpus set.
- */
-unsigned int cpumask_next(int n, const struct cpumask *srcp)
-{
-	/* -1 is a legal arg here. */
-	if (n != -1)
-		cpumask_check(n);
-	return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n + 1);
-}
-EXPORT_SYMBOL(cpumask_next);
-
-/**
- * cpumask_next_and - get the next cpu in *src1p & *src2p
- * @n: the cpu prior to the place to search (ie. return will be > @n)
- * @src1p: the first cpumask pointer
- * @src2p: the second cpumask pointer
- *
- * Returns >= nr_cpu_ids if no further cpus set in both.
- */
-unsigned int cpumask_next_and(int n, const struct cpumask *src1p,
-			      const struct cpumask *src2p)
-{
-	/* -1 is a legal arg here. */
-	if (n != -1)
-		cpumask_check(n);
-	return find_next_and_bit(cpumask_bits(src1p), cpumask_bits(src2p),
-		nr_cpumask_bits, n + 1);
-}
-EXPORT_SYMBOL(cpumask_next_and);
-
-/**
- * cpumask_any_but - return a "random" in a cpumask, but not this one.
- * @mask: the cpumask to search
- * @cpu: the cpu to ignore.
- *
- * Often used to find any cpu but smp_processor_id() in a mask.
- * Returns >= nr_cpu_ids if no cpus set.
- */
-unsigned int cpumask_any_but(const struct cpumask *mask, unsigned int cpu)
-{
-	unsigned int i;
-
-	cpumask_check(cpu);
-	for_each_cpu(i, mask)
-		if (i != cpu)
-			break;
-	return i;
-}
-EXPORT_SYMBOL(cpumask_any_but);
-
 /**
  * cpumask_next_wrap - helper to implement for_each_cpu_wrap
  * @n: the cpu prior to the place to search
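A small sketch (illustrative only; the counting helper is made up) of code that
benefits: for_each_cpu() expands to cpumask_next(), which after this patch is a
static inline wrapper around find_next_bit() rather than an out-of-line call.

/* Illustrative only. */
static unsigned int count_online_in(const struct cpumask *mask)
{
	unsigned int cpu, n = 0;

	for_each_cpu(cpu, mask)
		if (cpu_online(cpu))
			n++;
	return n;
}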
From patchwork Wed Jul 6 17:42:50 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12908436

From: Yury Norov
Subject: [PATCH 07/10] headers/deps: mm: Optimize header dependencies
Date: Wed, 6 Jul 2022 10:42:50 -0700
Message-Id: <20220706174253.4175492-8-yury.norov@gmail.com>

From: Ingo Molnar

There are a couple of superfluous inclusions here - remove them before
doing bigger changes.
Signed-off-by: Ingo Molnar Signed-off-by: Yury Norov --- include/linux/gfp.h | 3 --- 1 file changed, 3 deletions(-) diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 2d2ccae933c2..52f2c873a7d4 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -2,10 +2,7 @@ #ifndef __LINUX_GFP_H #define __LINUX_GFP_H -#include #include -#include -#include #include /* The typedef is in types.h but we want the documentation here */ From patchwork Wed Jul 6 17:42:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yury Norov X-Patchwork-Id: 12908438 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CD1DFC433EF for ; Wed, 6 Jul 2022 17:46:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=srspGIIXP5/5Ytl18ukz13HmGtvZFmBtMuHsGimK4kk=; b=m+2viLaH9Kz5Xw xXmdjFinnUfIhwkbomkkXX3VTnecwXZyeQ0aewVumFuonc8PEdmuoo0lUWN+LbYICfEZToXMVp2lL T/j9XIlNFV45GovBytB+A6rnNTMIUdLde8YIIRT4O9GuD6+YYSFJUjdEBszk8sIKxLbgA/gHx2LeL UqMOuRlA5LqX+4YOKcp7EOxpVFqYGp7dyLL/gpwdjbYCVvQR/nEJzSTF0FNdseHgFRhpiM1GFa1fc W2jynTGPmT2qMcddYKZfK2hKAvwf3O4UuZPgDRNqWTCHTCTMwkOTfqUS85gj16DfPqCFlkePoqmZw oF70UgxYlhUJU9fhQhVw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1o995V-00Bmwl-TQ; Wed, 06 Jul 2022 17:45:22 +0000 Received: from mail-qt1-x831.google.com ([2607:f8b0:4864:20::831]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1o993K-00Blpp-Md for linux-arm-kernel@lists.infradead.org; Wed, 06 Jul 2022 17:43:15 +0000 Received: by mail-qt1-x831.google.com with SMTP id c13so19253022qtq.10 for ; Wed, 06 Jul 2022 10:43:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=bsiYdF140lek56kOGdYEMP+oPunyoPAlImPHAu3g5u8=; b=NPXXmNmhnTAUw7ICcNGeIxJ5vsU42BuzCHVIw7U0wgmWSIUhHiiX76DVJhin01Ae6J VtrgTgrULN0cyGFDKEr1KiuC3D1M8jD3Nev3SWq2Hc7Cnf6B+a2/GH/n5mpf1hIgdeaD LbIEk4LmlYJU/7REGKX8ygv9rEInbkb0SrYgVDfimCUaLQccTYkc0ixsbJvAjoM/xFpX XtiNJ8cf5bVFAugBOHbDa1Ts5+q54HlwmDnACyfSFwErtPNgLj4TnDlb0xF0tK+rgY7D mdmAgXnbigaTLd1Nad4oG+UeSpx7S7YX3ofYoDllgY6j+FgFHqKXlfqPOSwfMQjHx+EX zvlA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=bsiYdF140lek56kOGdYEMP+oPunyoPAlImPHAu3g5u8=; b=zLQc0W2Viq9rkYEuETavxJo3uCS+9sIdmO4hjJr/+f/cLxpOK8FIvLwVl+wLS6Nssb Jm1VTMnoIoEEv+Tpu20Mw/B2Xv0kWShX1CxW9zc5rHyYRdvJVmqQ9zE175GJI+LnYTtF RZjEBDm/BqVbd1XCrP0qNCoMECg8yllQOzuDTfGweStKguvJyH0vVtEaiPuVfBuEI86R Fiy0Ng229F/Lvj3zGsTZGzPZ7nc1ZZsNpwr29euiFKwW3EYim1TQYArO2DoY7FB0QxJA 
1S/xOa8TDURsWnK5MaRNNwJkr26mE8U4A9gW12SocfaDVbIXzWiUKwpPMna2BG3G44lu Pksw== X-Gm-Message-State: AJIora9d2DRFx6SAVUjLAFlyqECTU6ODbEw63tdhz0+Iqri1FygzcvGr qLm+r5I3E9UDTa3OBkHQ5n8= X-Google-Smtp-Source: AGRyM1u3Eg279VZtdb+HrzmXr37fexjkVU9Kw1aFwkdrskBr5dQ42vlXQmrHIwx7jOCWelAxyW3I8A== X-Received: by 2002:a0c:f40c:0:b0:472:ff75:49ce with SMTP id h12-20020a0cf40c000000b00472ff7549cemr10840543qvl.91.1657129385794; Wed, 06 Jul 2022 10:43:05 -0700 (PDT) Received: from localhost (c-69-254-185-160.hsd1.ar.comcast.net. [69.254.185.160]) by smtp.gmail.com with ESMTPSA id j4-20020a05620a410400b006a6278a2b31sm20362880qko.75.2022.07.06.10.43.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Jul 2022 10:43:04 -0700 (PDT) From: Yury Norov To: linux-kernel@vger.kernel.org, Andrew Morton , Andy Shevchenko , David Howells , Ingo Molnar , Geert Uytterhoeven , Jonathan Corbet , "Kirill A . Shutemov" , Matthew Wilcox , NeilBrown , Rasmus Villemoes , Russell King , Vlastimil Babka , William Kucharski , linux-doc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org Cc: Yury Norov Subject: [PATCH 08/10] headers/deps: mm: Split out of Date: Wed, 6 Jul 2022 10:42:51 -0700 Message-Id: <20220706174253.4175492-9-yury.norov@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220706174253.4175492-1-yury.norov@gmail.com> References: <20220706174253.4175492-1-yury.norov@gmail.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220706_104306_924999_A94181AF X-CRM114-Status: GOOD ( 37.65 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Ingo Molnar This is a much smaller header. Signed-off-by: Ingo Molnar Signed-off-by: Yury Norov --- include/linux/gfp.h | 345 +------------------------------------ include/linux/gfp_types.h | 348 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 350 insertions(+), 343 deletions(-) create mode 100644 include/linux/gfp_types.h diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 52f2c873a7d4..f314be58fa77 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -2,354 +2,13 @@ #ifndef __LINUX_GFP_H #define __LINUX_GFP_H +#include + #include #include -/* The typedef is in types.h but we want the documentation here */ -#if 0 -/** - * typedef gfp_t - Memory allocation flags. - * - * GFP flags are commonly used throughout Linux to indicate how memory - * should be allocated. The GFP acronym stands for get_free_pages(), - * the underlying memory allocation function. Not every GFP flag is - * supported by every function which may allocate memory. Most users - * will want to use a plain ``GFP_KERNEL``. - */ -typedef unsigned int __bitwise gfp_t; -#endif - struct vm_area_struct; -/* - * In case of changes, please don't forget to update - * include/trace/events/mmflags.h and tools/perf/builtin-kmem.c - */ - -/* Plain integer GFP bitmasks. Do not use this directly. 
*/ -#define ___GFP_DMA 0x01u -#define ___GFP_HIGHMEM 0x02u -#define ___GFP_DMA32 0x04u -#define ___GFP_MOVABLE 0x08u -#define ___GFP_RECLAIMABLE 0x10u -#define ___GFP_HIGH 0x20u -#define ___GFP_IO 0x40u -#define ___GFP_FS 0x80u -#define ___GFP_ZERO 0x100u -#define ___GFP_ATOMIC 0x200u -#define ___GFP_DIRECT_RECLAIM 0x400u -#define ___GFP_KSWAPD_RECLAIM 0x800u -#define ___GFP_WRITE 0x1000u -#define ___GFP_NOWARN 0x2000u -#define ___GFP_RETRY_MAYFAIL 0x4000u -#define ___GFP_NOFAIL 0x8000u -#define ___GFP_NORETRY 0x10000u -#define ___GFP_MEMALLOC 0x20000u -#define ___GFP_COMP 0x40000u -#define ___GFP_NOMEMALLOC 0x80000u -#define ___GFP_HARDWALL 0x100000u -#define ___GFP_THISNODE 0x200000u -#define ___GFP_ACCOUNT 0x400000u -#define ___GFP_ZEROTAGS 0x800000u -#ifdef CONFIG_KASAN_HW_TAGS -#define ___GFP_SKIP_ZERO 0x1000000u -#define ___GFP_SKIP_KASAN_UNPOISON 0x2000000u -#define ___GFP_SKIP_KASAN_POISON 0x4000000u -#else -#define ___GFP_SKIP_ZERO 0 -#define ___GFP_SKIP_KASAN_UNPOISON 0 -#define ___GFP_SKIP_KASAN_POISON 0 -#endif -#ifdef CONFIG_LOCKDEP -#define ___GFP_NOLOCKDEP 0x8000000u -#else -#define ___GFP_NOLOCKDEP 0 -#endif -/* If the above are modified, __GFP_BITS_SHIFT may need updating */ - -/* - * Physical address zone modifiers (see linux/mmzone.h - low four bits) - * - * Do not put any conditional on these. If necessary modify the definitions - * without the underscores and use them consistently. The definitions here may - * be used in bit comparisons. - */ -#define __GFP_DMA ((__force gfp_t)___GFP_DMA) -#define __GFP_HIGHMEM ((__force gfp_t)___GFP_HIGHMEM) -#define __GFP_DMA32 ((__force gfp_t)___GFP_DMA32) -#define __GFP_MOVABLE ((__force gfp_t)___GFP_MOVABLE) /* ZONE_MOVABLE allowed */ -#define GFP_ZONEMASK (__GFP_DMA|__GFP_HIGHMEM|__GFP_DMA32|__GFP_MOVABLE) - -/** - * DOC: Page mobility and placement hints - * - * Page mobility and placement hints - * --------------------------------- - * - * These flags provide hints about how mobile the page is. Pages with similar - * mobility are placed within the same pageblocks to minimise problems due - * to external fragmentation. - * - * %__GFP_MOVABLE (also a zone modifier) indicates that the page can be - * moved by page migration during memory compaction or can be reclaimed. - * - * %__GFP_RECLAIMABLE is used for slab allocations that specify - * SLAB_RECLAIM_ACCOUNT and whose pages can be freed via shrinkers. - * - * %__GFP_WRITE indicates the caller intends to dirty the page. Where possible, - * these pages will be spread between local zones to avoid all the dirty - * pages being in one zone (fair zone allocation policy). - * - * %__GFP_HARDWALL enforces the cpuset memory allocation policy. - * - * %__GFP_THISNODE forces the allocation to be satisfied from the requested - * node with no fallbacks or placement policy enforcements. - * - * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg. - */ -#define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE) -#define __GFP_WRITE ((__force gfp_t)___GFP_WRITE) -#define __GFP_HARDWALL ((__force gfp_t)___GFP_HARDWALL) -#define __GFP_THISNODE ((__force gfp_t)___GFP_THISNODE) -#define __GFP_ACCOUNT ((__force gfp_t)___GFP_ACCOUNT) - -/** - * DOC: Watermark modifiers - * - * Watermark modifiers -- controls access to emergency reserves - * ------------------------------------------------------------ - * - * %__GFP_HIGH indicates that the caller is high-priority and that granting - * the request is necessary before the system can make forward progress. 
- * For example, creating an IO context to clean pages. - * - * %__GFP_ATOMIC indicates that the caller cannot reclaim or sleep and is - * high priority. Users are typically interrupt handlers. This may be - * used in conjunction with %__GFP_HIGH - * - * %__GFP_MEMALLOC allows access to all memory. This should only be used when - * the caller guarantees the allocation will allow more memory to be freed - * very shortly e.g. process exiting or swapping. Users either should - * be the MM or co-ordinating closely with the VM (e.g. swap over NFS). - * Users of this flag have to be extremely careful to not deplete the reserve - * completely and implement a throttling mechanism which controls the - * consumption of the reserve based on the amount of freed memory. - * Usage of a pre-allocated pool (e.g. mempool) should be always considered - * before using this flag. - * - * %__GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves. - * This takes precedence over the %__GFP_MEMALLOC flag if both are set. - */ -#define __GFP_ATOMIC ((__force gfp_t)___GFP_ATOMIC) -#define __GFP_HIGH ((__force gfp_t)___GFP_HIGH) -#define __GFP_MEMALLOC ((__force gfp_t)___GFP_MEMALLOC) -#define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC) - -/** - * DOC: Reclaim modifiers - * - * Reclaim modifiers - * ----------------- - * Please note that all the following flags are only applicable to sleepable - * allocations (e.g. %GFP_NOWAIT and %GFP_ATOMIC will ignore them). - * - * %__GFP_IO can start physical IO. - * - * %__GFP_FS can call down to the low-level FS. Clearing the flag avoids the - * allocator recursing into the filesystem which might already be holding - * locks. - * - * %__GFP_DIRECT_RECLAIM indicates that the caller may enter direct reclaim. - * This flag can be cleared to avoid unnecessary delays when a fallback - * option is available. - * - * %__GFP_KSWAPD_RECLAIM indicates that the caller wants to wake kswapd when - * the low watermark is reached and have it reclaim pages until the high - * watermark is reached. A caller may wish to clear this flag when fallback - * options are available and the reclaim is likely to disrupt the system. The - * canonical example is THP allocation where a fallback is cheap but - * reclaim/compaction may cause indirect stalls. - * - * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim. - * - * The default allocator behavior depends on the request size. We have a concept - * of so called costly allocations (with order > %PAGE_ALLOC_COSTLY_ORDER). - * !costly allocations are too essential to fail so they are implicitly - * non-failing by default (with some exceptions like OOM victims might fail so - * the caller still has to check for failures) while costly requests try to be - * not disruptive and back off even without invoking the OOM killer. - * The following three modifiers might be used to override some of these - * implicit rules - * - * %__GFP_NORETRY: The VM implementation will try only very lightweight - * memory direct reclaim to get some memory under memory pressure (thus - * it can sleep). It will avoid disruptive actions like OOM killer. The - * caller must handle the failure which is quite likely to happen under - * heavy memory pressure. 
The flag is suitable when failure can easily be - * handled at small cost, such as reduced throughput - * - * %__GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim - * procedures that have previously failed if there is some indication - * that progress has been made else where. It can wait for other - * tasks to attempt high level approaches to freeing memory such as - * compaction (which removes fragmentation) and page-out. - * There is still a definite limit to the number of retries, but it is - * a larger limit than with %__GFP_NORETRY. - * Allocations with this flag may fail, but only when there is - * genuinely little unused memory. While these allocations do not - * directly trigger the OOM killer, their failure indicates that - * the system is likely to need to use the OOM killer soon. The - * caller must handle failure, but can reasonably do so by failing - * a higher-level request, or completing it only in a much less - * efficient manner. - * If the allocation does fail, and the caller is in a position to - * free some non-essential memory, doing so could benefit the system - * as a whole. - * - * %__GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller - * cannot handle allocation failures. The allocation could block - * indefinitely but will never return with failure. Testing for - * failure is pointless. - * New users should be evaluated carefully (and the flag should be - * used only when there is no reasonable failure policy) but it is - * definitely preferable to use the flag rather than opencode endless - * loop around allocator. - * Using this flag for costly allocations is _highly_ discouraged. - */ -#define __GFP_IO ((__force gfp_t)___GFP_IO) -#define __GFP_FS ((__force gfp_t)___GFP_FS) -#define __GFP_DIRECT_RECLAIM ((__force gfp_t)___GFP_DIRECT_RECLAIM) /* Caller can reclaim */ -#define __GFP_KSWAPD_RECLAIM ((__force gfp_t)___GFP_KSWAPD_RECLAIM) /* kswapd can wake */ -#define __GFP_RECLAIM ((__force gfp_t)(___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM)) -#define __GFP_RETRY_MAYFAIL ((__force gfp_t)___GFP_RETRY_MAYFAIL) -#define __GFP_NOFAIL ((__force gfp_t)___GFP_NOFAIL) -#define __GFP_NORETRY ((__force gfp_t)___GFP_NORETRY) - -/** - * DOC: Action modifiers - * - * Action modifiers - * ---------------- - * - * %__GFP_NOWARN suppresses allocation failure reports. - * - * %__GFP_COMP address compound page metadata. - * - * %__GFP_ZERO returns a zeroed page on success. - * - * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself - * is being zeroed (either via __GFP_ZERO or via init_on_alloc, provided that - * __GFP_SKIP_ZERO is not set). This flag is intended for optimization: setting - * memory tags at the same time as zeroing memory has minimal additional - * performace impact. - * - * %__GFP_SKIP_KASAN_UNPOISON makes KASAN skip unpoisoning on page allocation. - * Only effective in HW_TAGS mode. - * - * %__GFP_SKIP_KASAN_POISON makes KASAN skip poisoning on page deallocation. - * Typically, used for userspace pages. Only effective in HW_TAGS mode. 
- */ -#define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN) -#define __GFP_COMP ((__force gfp_t)___GFP_COMP) -#define __GFP_ZERO ((__force gfp_t)___GFP_ZERO) -#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS) -#define __GFP_SKIP_ZERO ((__force gfp_t)___GFP_SKIP_ZERO) -#define __GFP_SKIP_KASAN_UNPOISON ((__force gfp_t)___GFP_SKIP_KASAN_UNPOISON) -#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON) - -/* Disable lockdep for GFP context tracking */ -#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP) - -/* Room for N __GFP_FOO bits */ -#define __GFP_BITS_SHIFT (27 + IS_ENABLED(CONFIG_LOCKDEP)) -#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) - -/** - * DOC: Useful GFP flag combinations - * - * Useful GFP flag combinations - * ---------------------------- - * - * Useful GFP flag combinations that are commonly used. It is recommended - * that subsystems start with one of these combinations and then set/clear - * %__GFP_FOO flags as necessary. - * - * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower - * watermark is applied to allow access to "atomic reserves". - * The current implementation doesn't support NMI and few other strict - * non-preemptive contexts (e.g. raw_spin_lock). The same applies to %GFP_NOWAIT. - * - * %GFP_KERNEL is typical for kernel-internal allocations. The caller requires - * %ZONE_NORMAL or a lower zone for direct access but can direct reclaim. - * - * %GFP_KERNEL_ACCOUNT is the same as GFP_KERNEL, except the allocation is - * accounted to kmemcg. - * - * %GFP_NOWAIT is for kernel allocations that should not stall for direct - * reclaim, start physical IO or use any filesystem callback. - * - * %GFP_NOIO will use direct reclaim to discard clean pages or slab pages - * that do not require the starting of any physical IO. - * Please try to avoid using this flag directly and instead use - * memalloc_noio_{save,restore} to mark the whole scope which cannot - * perform any IO with a short explanation why. All allocation requests - * will inherit GFP_NOIO implicitly. - * - * %GFP_NOFS will use direct reclaim but will not use any filesystem interfaces. - * Please try to avoid using this flag directly and instead use - * memalloc_nofs_{save,restore} to mark the whole scope which cannot/shouldn't - * recurse into the FS layer with a short explanation why. All allocation - * requests will inherit GFP_NOFS implicitly. - * - * %GFP_USER is for userspace allocations that also need to be directly - * accessibly by the kernel or hardware. It is typically used by hardware - * for buffers that are mapped to userspace (e.g. graphics) that hardware - * still must DMA to. cpuset limits are enforced for these allocations. - * - * %GFP_DMA exists for historical reasons and should be avoided where possible. - * The flags indicates that the caller requires that the lowest zone be - * used (%ZONE_DMA or 16M on x86-64). Ideally, this would be removed but - * it would require careful auditing as some users really require it and - * others use the flag to avoid lowmem reserves in %ZONE_DMA and treat the - * lowest zone as a type of emergency reserve. - * - * %GFP_DMA32 is similar to %GFP_DMA except that the caller requires a 32-bit - * address. Note that kmalloc(..., GFP_DMA32) does not return DMA32 memory - * because the DMA32 kmalloc cache array is not implemented. - * (Reason: there is no such user in kernel). 
- * - * %GFP_HIGHUSER is for userspace allocations that may be mapped to userspace, - * do not need to be directly accessible by the kernel but that cannot - * move once in use. An example may be a hardware allocation that maps - * data directly into userspace but has no addressing limitations. - * - * %GFP_HIGHUSER_MOVABLE is for userspace allocations that the kernel does not - * need direct access to but can use kmap() when access is required. They - * are expected to be movable via page reclaim or page migration. Typically, - * pages on the LRU would also be allocated with %GFP_HIGHUSER_MOVABLE. - * - * %GFP_TRANSHUGE and %GFP_TRANSHUGE_LIGHT are used for THP allocations. They - * are compound allocations that will generally fail quickly if memory is not - * available and will not wake kswapd/kcompactd on failure. The _LIGHT - * version does not attempt reclaim/compaction at all and is by default used - * in page fault path, while the non-light is used by khugepaged. - */ -#define GFP_ATOMIC (__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM) -#define GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS) -#define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT) -#define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM) -#define GFP_NOIO (__GFP_RECLAIM) -#define GFP_NOFS (__GFP_RECLAIM | __GFP_IO) -#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL) -#define GFP_DMA __GFP_DMA -#define GFP_DMA32 __GFP_DMA32 -#define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM) -#define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE | \ - __GFP_SKIP_KASAN_POISON) -#define GFP_TRANSHUGE_LIGHT ((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \ - __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM) -#define GFP_TRANSHUGE (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM) - /* Convert GFP flags to their corresponding migrate type */ #define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE) #define GFP_MOVABLE_SHIFT 3 diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h new file mode 100644 index 000000000000..06fc85cee23f --- /dev/null +++ b/include/linux/gfp_types.h @@ -0,0 +1,348 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __LINUX_GFP_TYPES_H +#define __LINUX_GFP_TYPES_H + +/* The typedef is in types.h but we want the documentation here */ +#if 0 +/** + * typedef gfp_t - Memory allocation flags. + * + * GFP flags are commonly used throughout Linux to indicate how memory + * should be allocated. The GFP acronym stands for get_free_pages(), + * the underlying memory allocation function. Not every GFP flag is + * supported by every function which may allocate memory. Most users + * will want to use a plain ``GFP_KERNEL``. + */ +typedef unsigned int __bitwise gfp_t; +#endif + +/* + * In case of changes, please don't forget to update + * include/trace/events/mmflags.h and tools/perf/builtin-kmem.c + */ + +/* Plain integer GFP bitmasks. Do not use this directly. 
*/ +#define ___GFP_DMA 0x01u +#define ___GFP_HIGHMEM 0x02u +#define ___GFP_DMA32 0x04u +#define ___GFP_MOVABLE 0x08u +#define ___GFP_RECLAIMABLE 0x10u +#define ___GFP_HIGH 0x20u +#define ___GFP_IO 0x40u +#define ___GFP_FS 0x80u +#define ___GFP_ZERO 0x100u +#define ___GFP_ATOMIC 0x200u +#define ___GFP_DIRECT_RECLAIM 0x400u +#define ___GFP_KSWAPD_RECLAIM 0x800u +#define ___GFP_WRITE 0x1000u +#define ___GFP_NOWARN 0x2000u +#define ___GFP_RETRY_MAYFAIL 0x4000u +#define ___GFP_NOFAIL 0x8000u +#define ___GFP_NORETRY 0x10000u +#define ___GFP_MEMALLOC 0x20000u +#define ___GFP_COMP 0x40000u +#define ___GFP_NOMEMALLOC 0x80000u +#define ___GFP_HARDWALL 0x100000u +#define ___GFP_THISNODE 0x200000u +#define ___GFP_ACCOUNT 0x400000u +#define ___GFP_ZEROTAGS 0x800000u +#ifdef CONFIG_KASAN_HW_TAGS +#define ___GFP_SKIP_ZERO 0x1000000u +#define ___GFP_SKIP_KASAN_UNPOISON 0x2000000u +#define ___GFP_SKIP_KASAN_POISON 0x4000000u +#else +#define ___GFP_SKIP_ZERO 0 +#define ___GFP_SKIP_KASAN_UNPOISON 0 +#define ___GFP_SKIP_KASAN_POISON 0 +#endif +#ifdef CONFIG_LOCKDEP +#define ___GFP_NOLOCKDEP 0x8000000u +#else +#define ___GFP_NOLOCKDEP 0 +#endif +/* If the above are modified, __GFP_BITS_SHIFT may need updating */ + +/* + * Physical address zone modifiers (see linux/mmzone.h - low four bits) + * + * Do not put any conditional on these. If necessary modify the definitions + * without the underscores and use them consistently. The definitions here may + * be used in bit comparisons. + */ +#define __GFP_DMA ((__force gfp_t)___GFP_DMA) +#define __GFP_HIGHMEM ((__force gfp_t)___GFP_HIGHMEM) +#define __GFP_DMA32 ((__force gfp_t)___GFP_DMA32) +#define __GFP_MOVABLE ((__force gfp_t)___GFP_MOVABLE) /* ZONE_MOVABLE allowed */ +#define GFP_ZONEMASK (__GFP_DMA|__GFP_HIGHMEM|__GFP_DMA32|__GFP_MOVABLE) + +/** + * DOC: Page mobility and placement hints + * + * Page mobility and placement hints + * --------------------------------- + * + * These flags provide hints about how mobile the page is. Pages with similar + * mobility are placed within the same pageblocks to minimise problems due + * to external fragmentation. + * + * %__GFP_MOVABLE (also a zone modifier) indicates that the page can be + * moved by page migration during memory compaction or can be reclaimed. + * + * %__GFP_RECLAIMABLE is used for slab allocations that specify + * SLAB_RECLAIM_ACCOUNT and whose pages can be freed via shrinkers. + * + * %__GFP_WRITE indicates the caller intends to dirty the page. Where possible, + * these pages will be spread between local zones to avoid all the dirty + * pages being in one zone (fair zone allocation policy). + * + * %__GFP_HARDWALL enforces the cpuset memory allocation policy. + * + * %__GFP_THISNODE forces the allocation to be satisfied from the requested + * node with no fallbacks or placement policy enforcements. + * + * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg. + */ +#define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE) +#define __GFP_WRITE ((__force gfp_t)___GFP_WRITE) +#define __GFP_HARDWALL ((__force gfp_t)___GFP_HARDWALL) +#define __GFP_THISNODE ((__force gfp_t)___GFP_THISNODE) +#define __GFP_ACCOUNT ((__force gfp_t)___GFP_ACCOUNT) + +/** + * DOC: Watermark modifiers + * + * Watermark modifiers -- controls access to emergency reserves + * ------------------------------------------------------------ + * + * %__GFP_HIGH indicates that the caller is high-priority and that granting + * the request is necessary before the system can make forward progress. 
+ * For example, creating an IO context to clean pages. + * + * %__GFP_ATOMIC indicates that the caller cannot reclaim or sleep and is + * high priority. Users are typically interrupt handlers. This may be + * used in conjunction with %__GFP_HIGH + * + * %__GFP_MEMALLOC allows access to all memory. This should only be used when + * the caller guarantees the allocation will allow more memory to be freed + * very shortly e.g. process exiting or swapping. Users either should + * be the MM or co-ordinating closely with the VM (e.g. swap over NFS). + * Users of this flag have to be extremely careful to not deplete the reserve + * completely and implement a throttling mechanism which controls the + * consumption of the reserve based on the amount of freed memory. + * Usage of a pre-allocated pool (e.g. mempool) should be always considered + * before using this flag. + * + * %__GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves. + * This takes precedence over the %__GFP_MEMALLOC flag if both are set. + */ +#define __GFP_ATOMIC ((__force gfp_t)___GFP_ATOMIC) +#define __GFP_HIGH ((__force gfp_t)___GFP_HIGH) +#define __GFP_MEMALLOC ((__force gfp_t)___GFP_MEMALLOC) +#define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC) + +/** + * DOC: Reclaim modifiers + * + * Reclaim modifiers + * ----------------- + * Please note that all the following flags are only applicable to sleepable + * allocations (e.g. %GFP_NOWAIT and %GFP_ATOMIC will ignore them). + * + * %__GFP_IO can start physical IO. + * + * %__GFP_FS can call down to the low-level FS. Clearing the flag avoids the + * allocator recursing into the filesystem which might already be holding + * locks. + * + * %__GFP_DIRECT_RECLAIM indicates that the caller may enter direct reclaim. + * This flag can be cleared to avoid unnecessary delays when a fallback + * option is available. + * + * %__GFP_KSWAPD_RECLAIM indicates that the caller wants to wake kswapd when + * the low watermark is reached and have it reclaim pages until the high + * watermark is reached. A caller may wish to clear this flag when fallback + * options are available and the reclaim is likely to disrupt the system. The + * canonical example is THP allocation where a fallback is cheap but + * reclaim/compaction may cause indirect stalls. + * + * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim. + * + * The default allocator behavior depends on the request size. We have a concept + * of so called costly allocations (with order > %PAGE_ALLOC_COSTLY_ORDER). + * !costly allocations are too essential to fail so they are implicitly + * non-failing by default (with some exceptions like OOM victims might fail so + * the caller still has to check for failures) while costly requests try to be + * not disruptive and back off even without invoking the OOM killer. + * The following three modifiers might be used to override some of these + * implicit rules + * + * %__GFP_NORETRY: The VM implementation will try only very lightweight + * memory direct reclaim to get some memory under memory pressure (thus + * it can sleep). It will avoid disruptive actions like OOM killer. The + * caller must handle the failure which is quite likely to happen under + * heavy memory pressure. 
The flag is suitable when failure can easily be + * handled at small cost, such as reduced throughput + * + * %__GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim + * procedures that have previously failed if there is some indication + * that progress has been made else where. It can wait for other + * tasks to attempt high level approaches to freeing memory such as + * compaction (which removes fragmentation) and page-out. + * There is still a definite limit to the number of retries, but it is + * a larger limit than with %__GFP_NORETRY. + * Allocations with this flag may fail, but only when there is + * genuinely little unused memory. While these allocations do not + * directly trigger the OOM killer, their failure indicates that + * the system is likely to need to use the OOM killer soon. The + * caller must handle failure, but can reasonably do so by failing + * a higher-level request, or completing it only in a much less + * efficient manner. + * If the allocation does fail, and the caller is in a position to + * free some non-essential memory, doing so could benefit the system + * as a whole. + * + * %__GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller + * cannot handle allocation failures. The allocation could block + * indefinitely but will never return with failure. Testing for + * failure is pointless. + * New users should be evaluated carefully (and the flag should be + * used only when there is no reasonable failure policy) but it is + * definitely preferable to use the flag rather than opencode endless + * loop around allocator. + * Using this flag for costly allocations is _highly_ discouraged. + */ +#define __GFP_IO ((__force gfp_t)___GFP_IO) +#define __GFP_FS ((__force gfp_t)___GFP_FS) +#define __GFP_DIRECT_RECLAIM ((__force gfp_t)___GFP_DIRECT_RECLAIM) /* Caller can reclaim */ +#define __GFP_KSWAPD_RECLAIM ((__force gfp_t)___GFP_KSWAPD_RECLAIM) /* kswapd can wake */ +#define __GFP_RECLAIM ((__force gfp_t)(___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM)) +#define __GFP_RETRY_MAYFAIL ((__force gfp_t)___GFP_RETRY_MAYFAIL) +#define __GFP_NOFAIL ((__force gfp_t)___GFP_NOFAIL) +#define __GFP_NORETRY ((__force gfp_t)___GFP_NORETRY) + +/** + * DOC: Action modifiers + * + * Action modifiers + * ---------------- + * + * %__GFP_NOWARN suppresses allocation failure reports. + * + * %__GFP_COMP address compound page metadata. + * + * %__GFP_ZERO returns a zeroed page on success. + * + * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself + * is being zeroed (either via __GFP_ZERO or via init_on_alloc, provided that + * __GFP_SKIP_ZERO is not set). This flag is intended for optimization: setting + * memory tags at the same time as zeroing memory has minimal additional + * performace impact. + * + * %__GFP_SKIP_KASAN_UNPOISON makes KASAN skip unpoisoning on page allocation. + * Only effective in HW_TAGS mode. + * + * %__GFP_SKIP_KASAN_POISON makes KASAN skip poisoning on page deallocation. + * Typically, used for userspace pages. Only effective in HW_TAGS mode. 
+ */ +#define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN) +#define __GFP_COMP ((__force gfp_t)___GFP_COMP) +#define __GFP_ZERO ((__force gfp_t)___GFP_ZERO) +#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS) +#define __GFP_SKIP_ZERO ((__force gfp_t)___GFP_SKIP_ZERO) +#define __GFP_SKIP_KASAN_UNPOISON ((__force gfp_t)___GFP_SKIP_KASAN_UNPOISON) +#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON) + +/* Disable lockdep for GFP context tracking */ +#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP) + +/* Room for N __GFP_FOO bits */ +#define __GFP_BITS_SHIFT (27 + IS_ENABLED(CONFIG_LOCKDEP)) +#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) + +/** + * DOC: Useful GFP flag combinations + * + * Useful GFP flag combinations + * ---------------------------- + * + * Useful GFP flag combinations that are commonly used. It is recommended + * that subsystems start with one of these combinations and then set/clear + * %__GFP_FOO flags as necessary. + * + * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower + * watermark is applied to allow access to "atomic reserves". + * The current implementation doesn't support NMI and few other strict + * non-preemptive contexts (e.g. raw_spin_lock). The same applies to %GFP_NOWAIT. + * + * %GFP_KERNEL is typical for kernel-internal allocations. The caller requires + * %ZONE_NORMAL or a lower zone for direct access but can direct reclaim. + * + * %GFP_KERNEL_ACCOUNT is the same as GFP_KERNEL, except the allocation is + * accounted to kmemcg. + * + * %GFP_NOWAIT is for kernel allocations that should not stall for direct + * reclaim, start physical IO or use any filesystem callback. + * + * %GFP_NOIO will use direct reclaim to discard clean pages or slab pages + * that do not require the starting of any physical IO. + * Please try to avoid using this flag directly and instead use + * memalloc_noio_{save,restore} to mark the whole scope which cannot + * perform any IO with a short explanation why. All allocation requests + * will inherit GFP_NOIO implicitly. + * + * %GFP_NOFS will use direct reclaim but will not use any filesystem interfaces. + * Please try to avoid using this flag directly and instead use + * memalloc_nofs_{save,restore} to mark the whole scope which cannot/shouldn't + * recurse into the FS layer with a short explanation why. All allocation + * requests will inherit GFP_NOFS implicitly. + * + * %GFP_USER is for userspace allocations that also need to be directly + * accessibly by the kernel or hardware. It is typically used by hardware + * for buffers that are mapped to userspace (e.g. graphics) that hardware + * still must DMA to. cpuset limits are enforced for these allocations. + * + * %GFP_DMA exists for historical reasons and should be avoided where possible. + * The flags indicates that the caller requires that the lowest zone be + * used (%ZONE_DMA or 16M on x86-64). Ideally, this would be removed but + * it would require careful auditing as some users really require it and + * others use the flag to avoid lowmem reserves in %ZONE_DMA and treat the + * lowest zone as a type of emergency reserve. + * + * %GFP_DMA32 is similar to %GFP_DMA except that the caller requires a 32-bit + * address. Note that kmalloc(..., GFP_DMA32) does not return DMA32 memory + * because the DMA32 kmalloc cache array is not implemented. + * (Reason: there is no such user in kernel). 
+ * + * %GFP_HIGHUSER is for userspace allocations that may be mapped to userspace, + * do not need to be directly accessible by the kernel but that cannot + * move once in use. An example may be a hardware allocation that maps + * data directly into userspace but has no addressing limitations. + * + * %GFP_HIGHUSER_MOVABLE is for userspace allocations that the kernel does not + * need direct access to but can use kmap() when access is required. They + * are expected to be movable via page reclaim or page migration. Typically, + * pages on the LRU would also be allocated with %GFP_HIGHUSER_MOVABLE. + * + * %GFP_TRANSHUGE and %GFP_TRANSHUGE_LIGHT are used for THP allocations. They + * are compound allocations that will generally fail quickly if memory is not + * available and will not wake kswapd/kcompactd on failure. The _LIGHT + * version does not attempt reclaim/compaction at all and is by default used + * in page fault path, while the non-light is used by khugepaged. + */ +#define GFP_ATOMIC (__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM) +#define GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS) +#define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT) +#define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM) +#define GFP_NOIO (__GFP_RECLAIM) +#define GFP_NOFS (__GFP_RECLAIM | __GFP_IO) +#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL) +#define GFP_DMA __GFP_DMA +#define GFP_DMA32 __GFP_DMA32 +#define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM) +#define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE | \ + __GFP_SKIP_KASAN_POISON) +#define GFP_TRANSHUGE_LIGHT ((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \ + __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM) +#define GFP_TRANSHUGE (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM) + +#endif /* __LINUX_GFP_TYPES_H */ From patchwork Wed Jul 6 17:42:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yury Norov X-Patchwork-Id: 12908448 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1A98DC43334 for ; Wed, 6 Jul 2022 18:00:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=p9fOR2yk0oV6THAsWCuWGCVDPITKg32wfMCieuawgX4=; b=ytkRUIN2Zz9WMI AUJ8g6ig3/lQ7BhLeLg60pAhBnTGcEBf7Ijvoj4+Lu0DAqx0I/e1V3K/Ax6fH+JWwX+vljXRSnXhK W1HC290ndRyqRB77N4hU3zeigfFZgnB3wZjPiLUikr17NyPbsfXx074HVsagDv2M9+XC9I14zPQLf k99uRe3Ogxiakd7SNbaaHkz63m6m1h0hZ5KyYQqLdN1QYZM8KftauJb1PZW0Jvz3O4MA2Uyvs9aFn Y7LWmHwiwNfYsoaxgGhiUlxMfWB5G4Sbfh3wGdCXgO14t/QonRWTH43T0LQkvxWjTFxWvx+5IQ+6b n5hfQLPiIzxuZ1rgiaHA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1o99JJ-00BsTA-MH; Wed, 06 Jul 2022 17:59:37 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) 
id 1o99JH-00BsQD-IO for linux-arm-kernel@bombadil.infradead.org; Wed, 06 Jul 2022 17:59:35 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:MIME-Version :References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=RTX+2ZmhNLKv9X3CqjJGf6dksV/0DVIM74S01uC3qWA=; b=RKDb2wJDG9fA6dwBzSKfM0tXJD TJ6nPGmFYtlhhnmIrsWdvmQUUAiNV+votpTj3mPLQ/waF26il72a5DkYy8bl9l62pO8JBNpJYtOm2 yQrZwLfTyyREAM3E82+kXhQo+48sni7Fb8ZSVNC4lITbRvxfQK8jJxFfgZeDwDXj+6Gg1Cnl8otBQ 3N1ok85s4Per3xZ875frfGk5FKLNdvw9ZvyR0p9+tu7a9b4Psq4xzSNKqxHRD7tpCk43GEdd2O6HT 8iXsUUSQ2XYiWaQHQJwhJp7Wk+ybU/WlOj9A1q37/yeJJMgUkA1ZCaAmEUrb+XWr8RCVDba6NTsru EyV2rHgQ==; Received: from mail-qk1-x731.google.com ([2607:f8b0:4864:20::731]) by desiato.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1o993U-000u9G-Cm for linux-arm-kernel@lists.infradead.org; Wed, 06 Jul 2022 17:43:22 +0000 Received: by mail-qk1-x731.google.com with SMTP id g1so11580105qkl.9 for ; Wed, 06 Jul 2022 10:43:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=RTX+2ZmhNLKv9X3CqjJGf6dksV/0DVIM74S01uC3qWA=; b=UqIyAfWAmZmjeU+EenLlPE0VmOtNlBXkBvRtQFrmqf8vibXabUa6dSf+8yGO0vHnKp KDVdg7apxQvX4d1b8Ws9MgOGuWWx8V/RiOVVlGerPweObz2WTWdlCIU0lScSj3Z/zWed q86ogK3mNR3AWuWOAsbWQ/bfhMEX3ZWqlFhL1Cj1RyXz064GT0JeUAYIvGOqpnRpqwgI sL0MyAgDqQorU98niV7Gl6c5Yflqp4eoXbBJ3F21nx5ktSlXFQyJD430kkNgeoYhfc9N XW/wra8Tu8vQbBtgs3K5c+Mp9Xr+d8y5EozH1/A69cPUbirDxHP4lfMr/YOp/uIZVUgG oCqA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=RTX+2ZmhNLKv9X3CqjJGf6dksV/0DVIM74S01uC3qWA=; b=WjHObVq/6QIe2QiQMPP/X+rCIx0prVTsZnj6He6ykRLQQRUJfrS9ZMANJLStUCHWUA Bfo8pnSWEAXLSgtSg/dSBYE/ZK9tPZgZcXrGotDBvGTmjTzc3EVAX3IWL6B+Jft3Eu6v UnvZRPOcNNnkiG1VjhjAQmd+hm7X9JdcUfwWu4+h6P9rSTI/UiBgpZ6bRgf0hZtQLQWt rDkYAkxJBDsQhcMdokT7ejH/YI6JlxRAuHmtdOPKdqSeVlioWwmGwtf3Tul2rD2+Ljka fLUOlSP4drQ+xCNJEYh8pQAlpCq4cAKeyELdkziRo6HBL61fFSj9W8ym+mFPRKZ6g2VO lU1Q== X-Gm-Message-State: AJIora8n6OGAElJtbm8KvVYrBa1PtqCZCI0gDh8jl+zFs32zwGQo+n0W wSualKOvb3IxmVn1/ezmYQo= X-Google-Smtp-Source: AGRyM1tpnu+LAuLYoIyI9/LCG9g4Hwiof2ASLOuavcg0x5s9g9u1zNCfiYJuBjjfPsO8eZ6Udpuckg== X-Received: by 2002:a05:620a:1206:b0:6b5:1758:de95 with SMTP id u6-20020a05620a120600b006b51758de95mr2855134qkj.100.1657129387265; Wed, 06 Jul 2022 10:43:07 -0700 (PDT) Received: from localhost (c-69-254-185-160.hsd1.ar.comcast.net. [69.254.185.160]) by smtp.gmail.com with ESMTPSA id f10-20020a05620a280a00b006a69d7f390csm30803845qkp.103.2022.07.06.10.43.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Jul 2022 10:43:06 -0700 (PDT) From: Yury Norov To: linux-kernel@vger.kernel.org, Andrew Morton , Andy Shevchenko , David Howells , Ingo Molnar , Geert Uytterhoeven , Jonathan Corbet , "Kirill A . 
Shutemov" , Matthew Wilcox , NeilBrown , Rasmus Villemoes , Russell King , Vlastimil Babka , William Kucharski , linux-doc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org Cc: Yury Norov Subject: [PATCH 09/10] headers/deps: mm: align MANITAINERS and Docs with new gfp.h structure Date: Wed, 6 Jul 2022 10:42:52 -0700 Message-Id: <20220706174253.4175492-10-yury.norov@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220706174253.4175492-1-yury.norov@gmail.com> References: <20220706174253.4175492-1-yury.norov@gmail.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220706_184319_141782_BA3FDEF7 X-CRM114-Status: GOOD ( 12.21 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org After moving gfp types out of gfp.h, we have to align MAINTAINERS and Docs, to avoid warnings like this: >> include/linux/gfp.h:1: warning: 'Page mobility and placement hints' not found >> include/linux/gfp.h:1: warning: 'Watermark modifiers' not found >> include/linux/gfp.h:1: warning: 'Reclaim modifiers' not found >> include/linux/gfp.h:1: warning: 'Useful GFP flag combinations' not found Signed-off-by: Yury Norov --- Documentation/core-api/mm-api.rst | 8 ++++---- MAINTAINERS | 1 + 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst index f5b2f92822c8..1ebcc6c3fafe 100644 --- a/Documentation/core-api/mm-api.rst +++ b/Documentation/core-api/mm-api.rst @@ -22,16 +22,16 @@ Memory Allocation Controls .. kernel-doc:: include/linux/gfp.h :internal: -.. kernel-doc:: include/linux/gfp.h +.. kernel-doc:: include/linux/gfp_types.h :doc: Page mobility and placement hints -.. kernel-doc:: include/linux/gfp.h +.. kernel-doc:: include/linux/gfp_types.h :doc: Watermark modifiers -.. kernel-doc:: include/linux/gfp.h +.. kernel-doc:: include/linux/gfp_types.h :doc: Reclaim modifiers -.. kernel-doc:: include/linux/gfp.h +.. 
kernel-doc:: include/linux/gfp_types.h :doc: Useful GFP flag combinations The Slab Cache diff --git a/MAINTAINERS b/MAINTAINERS index 3cf9842d9233..7c0b8f28aa25 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -12850,6 +12850,7 @@ T: quilt https://ozlabs.org/~akpm/mmotm/ T: quilt https://ozlabs.org/~akpm/mmots/ T: git git://github.com/hnaz/linux-mm.git F: include/linux/gfp.h +F: include/linux/gfp_types.h F: include/linux/memory_hotplug.h F: include/linux/mm.h F: include/linux/mmzone.h From patchwork Wed Jul 6 17:42:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yury Norov X-Patchwork-Id: 12908449 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D3347C43334 for ; Wed, 6 Jul 2022 18:00:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=IgninNJnRkuPqaWJku4Ag8KmFZXSsN8tFFxDe2ZhbmE=; b=AquXC7UY2npupn dEa+DcFu1ux29Pd6IT71CdgD2xgY3JBTQSxl7VR8yi/vuL4Ft6eqszFDCJE2l3Agy5+Ldku8voZue d12CQXqzPTOn+VQb6GnXLpgSHtbJw+ZqPGCyAVT3U7clVYVON9kv8EF0I2infIZKODANcODvfHlxp DHsp0sFAkvfBBevNlYsa9NqVtt4Y1/Urpyt8XW4Hc74721g/TKRSUHZrmtjG6W8OSds9BWFceHmSc yasAVp3w64i5fv/Nelx7/fabDkIlixFucwzPyFhKnep8XxWxBWZ2VQHSurp+fK3/etRqCNkC72FSO q6XsAj9XZTs0fjmQ+MvQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1o99JT-00BsVw-83; Wed, 06 Jul 2022 17:59:47 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1o99JL-00BsQD-D7 for linux-arm-kernel@bombadil.infradead.org; Wed, 06 Jul 2022 17:59:39 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:MIME-Version :References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=yfeDuyRYNoVK5OuNHYtPJI0+qGNG77BF8q+9jUECqCA=; b=G6bLR0M5l6Q5WraogSxELwnKrt bAHRa8xfjQJdUmJIeniIkd8bqVNjn2PVZJVJUz2PjIALVJXGlCMLs2Y1CciWTlTUi+h+7yBdcsDmM ODPSPmActpzC+BIHGCr4zOr1Yj/nr0aEN1vTv+qlW96gRXyWA52oynyEuJjCVJedzQKWcTnK99hzo FqvOuSr9pu2OTaN49zJQMFhAFe9Fr9Qo3BZJV1/vpC4WgsF6NVJTHB3cf87CEmT0a8jI+iZwsOgVj 5/q6bWGRksRTfhxknQzMIAIe7iN9HYGd5ioocvZN/oKb+N3xwnL1iFuVGIMn8QA5JMq7UrNMRC+Ts lzYj6YCg==; Received: from mail-qt1-x82f.google.com ([2607:f8b0:4864:20::82f]) by desiato.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1o993V-000u9U-KQ for linux-arm-kernel@lists.infradead.org; Wed, 06 Jul 2022 17:43:22 +0000 Received: by mail-qt1-x82f.google.com with SMTP id c13so19253169qtq.10 for ; Wed, 06 Jul 2022 10:43:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; 
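The fix above is needed because kernel-doc resolves a :doc: reference only within the file named by the directive; once the DOC: sections moved to gfp_types.h, directives still pointing at gfp.h produce the "not found" warnings quoted in the commit message. A minimal sketch of how a DOC: block and its directive pair up (the header path and section name below are invented, for illustration only):

    /* include/linux/example_flags.h -- hypothetical header, for illustration only */

    /**
     * DOC: Example flag combinations
     *
     * Text in a "DOC:" kernel-doc block is pulled into the Sphinx output by a
     * matching directive in the .rst file, e.g.:
     *
     *   .. kernel-doc:: include/linux/example_flags.h
     *      :doc: Example flag combinations
     *
     * If the block moves to another header, every such directive must be
     * updated to the new path, otherwise "not found" warnings are emitted.
     */
    #define EXAMPLE_FLAG_A	0x01u
    #define EXAMPLE_FLAG_B	0x02u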
From patchwork Wed Jul 6 17:42:53 2022
From: Yury Norov
To: linux-kernel@vger.kernel.org, Andrew Morton, Andy Shevchenko, David Howells,
    Ingo Molnar, Geert Uytterhoeven, Jonathan Corbet, "Kirill A. Shutemov",
    Matthew Wilcox, NeilBrown, Rasmus Villemoes, Russell King, Vlastimil Babka,
    William Kucharski, linux-doc@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org
Cc: Yury Norov
Subject: [PATCH 10/10] lib/cpumask: move some one-line wrappers to header file
Date: Wed, 6 Jul 2022 10:42:53 -0700
Message-Id: <20220706174253.4175492-11-yury.norov@gmail.com>
In-Reply-To: <20220706174253.4175492-1-yury.norov@gmail.com>
References: <20220706174253.4175492-1-yury.norov@gmail.com>

After moving gfp flags to a separate header, it's possible to move some
cpumask allocators into headers, and avoid creating real functions.

Signed-off-by: Yury Norov
---
 include/linux/cpumask.h | 34 +++++++++++++++++++++++++++++++---
 lib/cpumask.c           | 28 ----------------------------
 2 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index ea3de2c2c180..80627362c774 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -12,6 +12,8 @@
 #include
 #include
 #include
+#include
+#include
 
 /* Don't assign or return these: may not be this big! */
 typedef struct cpumask { DECLARE_BITMAP(bits, NR_CPUS); } cpumask_t;
@@ -794,9 +796,35 @@ typedef struct cpumask *cpumask_var_t;
 #define __cpumask_var_read_mostly	__read_mostly
 
 bool alloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags, int node);
-bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags);
-bool zalloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags, int node);
-bool zalloc_cpumask_var(cpumask_var_t *mask, gfp_t flags);
+
+static inline
+bool zalloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags, int node)
+{
+	return alloc_cpumask_var_node(mask, flags | __GFP_ZERO, node);
+}
+
+/**
+ * alloc_cpumask_var - allocate a struct cpumask
+ * @mask: pointer to cpumask_var_t where the cpumask is returned
+ * @flags: GFP_ flags
+ *
+ * Only defined when CONFIG_CPUMASK_OFFSTACK=y, otherwise is
+ * a nop returning a constant 1 (in ).
+ *
+ * See alloc_cpumask_var_node.
+ */
+static inline
+bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
+{
+	return alloc_cpumask_var_node(mask, flags, NUMA_NO_NODE);
+}
+
+static inline
+bool zalloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
+{
+	return alloc_cpumask_var(mask, flags | __GFP_ZERO);
+}
+
 void alloc_bootmem_cpumask_var(cpumask_var_t *mask);
 void free_cpumask_var(cpumask_var_t mask);
 void free_bootmem_cpumask_var(cpumask_var_t mask);
diff --git a/lib/cpumask.c b/lib/cpumask.c
index cb7262ff8633..f0ae119be8c4 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -70,34 +70,6 @@ bool alloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags, int node)
 }
 EXPORT_SYMBOL(alloc_cpumask_var_node);
 
-bool zalloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags, int node)
-{
-	return alloc_cpumask_var_node(mask, flags | __GFP_ZERO, node);
-}
-EXPORT_SYMBOL(zalloc_cpumask_var_node);
-
-/**
- * alloc_cpumask_var - allocate a struct cpumask
- * @mask: pointer to cpumask_var_t where the cpumask is returned
- * @flags: GFP_ flags
- *
- * Only defined when CONFIG_CPUMASK_OFFSTACK=y, otherwise is
- * a nop returning a constant 1 (in ).
- *
- * See alloc_cpumask_var_node.
- */
-bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
-{
-	return alloc_cpumask_var_node(mask, flags, NUMA_NO_NODE);
-}
-EXPORT_SYMBOL(alloc_cpumask_var);
-
-bool zalloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
-{
-	return alloc_cpumask_var(mask, flags | __GFP_ZERO);
-}
-EXPORT_SYMBOL(zalloc_cpumask_var);
-
 /**
  * alloc_bootmem_cpumask_var - allocate a struct cpumask from the bootmem arena.
  * @mask: pointer to cpumask_var_t where the cpumask is returned
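The wrappers moved above keep their semantics; only the linkage changes from exported functions to static inlines. A minimal usage sketch, assuming a sleepable GFP_KERNEL context (demo_cpumask_alloc() is a made-up name, for illustration only): with CONFIG_CPUMASK_OFFSTACK=y the mask is heap-allocated with the given flags, otherwise the allocation calls reduce to the constant-true stubs mentioned in the kernel-doc comment.

    #include <linux/cpumask.h>
    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/printk.h>

    /* Illustrative only: allocate a zeroed cpumask, mark CPU 0, then free it. */
    static int demo_cpumask_alloc(void)
    {
    	cpumask_var_t mask;

    	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
    		return -ENOMEM;

    	cpumask_set_cpu(0, mask);
    	pr_info("demo: %u bit(s) set\n", cpumask_weight(mask));

    	free_cpumask_var(mask);
    	return 0;
    }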