From patchwork Sun Nov 28 03:56:56 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12642739
From: Yury Norov
To: linux-kernel@vger.kernel.org, Yury Norov, "James E.J. Bottomley",
	"Martin K. Petersen", "Paul E. McKenney", "Rafael J.
	Wysocki", Alexander Shishkin, Alexey Klimov, Amitkumar Karwar,
	Andi Kleen, Andrew Lunn, Andrew Morton, Andy Gross, Andy Lutomirski,
	Andy Shevchenko, Anup Patel, Ard Biesheuvel, Arnaldo Carvalho de Melo,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas, Christoph Hellwig,
	Christoph Lameter, Daniel Vetter, Dave Hansen, David Airlie,
	David Laight, Dennis Zhou, Dinh Nguyen, Geetha sowjanya,
	Geert Uytterhoeven, Greg Kroah-Hartman, Guo Ren, Hans de Goede,
	Heiko Carstens, Ian Rogers, Ingo Molnar, Jakub Kicinski, Jason Wessel,
	Jens Axboe, Jiri Olsa, Jonathan Cameron, Juri Lelli, Kalle Valo,
	Kees Cook, Krzysztof Kozlowski, Lee Jones, Marc Zyngier,
	Marcin Wojtas, Mark Gross, Mark Rutland, Matti Vaittinen,
	Mauro Carvalho Chehab, Mel Gorman, Michael Ellerman,
	Mike Marciniszyn, Nicholas Piggin, Palmer Dabbelt, Peter Zijlstra,
	Petr Mladek, Randy Dunlap, Rasmus Villemoes, Roy Pledge,
	Russell King, Saeed Mahameed, Sagi Grimberg, Sergey Senozhatsky,
	Solomon Peachy, Stephen Boyd, Stephen Rothwell, Steven Rostedt,
	Subbaraya Sundeep, Sudeep Holla, Sunil Goutham, Tariq Toukan,
	Tejun Heo, Thomas Bogendoerfer, Thomas Gleixner, Ulf Hansson,
	Vincent Guittot, Vineet Gupta, Viresh Kumar, Vivien Didelot,
	Vlastimil Babka, Will Deacon, bcm-kernel-feedback-list@broadcom.com,
	kvm@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-ia64@vger.kernel.org,
	linux-mips@vger.kernel.org, linux-mm@kvack.org,
	linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-snps-arc@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 1/9] lib/bitmap: add bitmap_weight_{eq,gt,le}
Date: Sat, 27 Nov 2021 19:56:56 -0800
Message-Id: <20211128035704.270739-2-yury.norov@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211128035704.270739-1-yury.norov@gmail.com>
References: <20211128035704.270739-1-yury.norov@gmail.com>

Many kernel users call bitmap_weight() to compare the result against
some number or expression:

	if (bitmap_weight(...) > 1)
		do_something();

This works, but for large bitmaps it can be significantly improved:
if the first few words already contain more set bits than the given
number, we can stop counting and return immediately.

The same idea works in the other direction: if the number of set bits
counted so far is so small that it would stay below the required
number even if every remaining bit were set, we can also return early.

This patch adds new bitmap_weight_{eq,gt,le} functions to allow this
optimization, and the following patches apply them where appropriate.

Signed-off-by: Yury Norov
---
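Illustrative usage (a sketch placed below the "---" cut line, so it is
not part of the applied patch; 'mask' and 'nbits' stand in for any
caller's bitmap and its length):

	/* Before: counts every set bit, even after the answer is known */
	if (bitmap_weight(mask, nbits) > 1)
		do_something();

	/* After: may return as soon as a second set bit is found */
	if (bitmap_weight_gt(mask, nbits, 1))
		do_something();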
 include/linux/bitmap.h | 33 ++++++++++++++++++++++
 lib/bitmap.c           | 63 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 96 insertions(+)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 7dba0847510c..996041f771c8 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -51,6 +51,9 @@ struct device;
  *  bitmap_empty(src, nbits)                Are all bits zero in *src?
  *  bitmap_full(src, nbits)                 Are all bits set in *src?
  *  bitmap_weight(src, nbits)               Hamming Weight: number set bits
+ *  bitmap_weight_eq(src, nbits, num)       Hamming Weight is equal to num
+ *  bitmap_weight_gt(src, nbits, num)       Hamming Weight is greater than num
+ *  bitmap_weight_le(src, nbits, num)       Hamming Weight is less than num
  *  bitmap_set(dst, pos, nbits)             Set specified bit area
  *  bitmap_clear(dst, pos, nbits)           Clear specified bit area
  *  bitmap_find_next_zero_area(buf, len, pos, n, mask)  Find bit free area
@@ -162,6 +165,9 @@ int __bitmap_intersects(const unsigned long *bitmap1,
 int __bitmap_subset(const unsigned long *bitmap1,
 		const unsigned long *bitmap2, unsigned int nbits);
 int __bitmap_weight(const unsigned long *bitmap, unsigned int nbits);
+bool __bitmap_weight_eq(const unsigned long *bitmap, unsigned int nbits, unsigned int num);
+bool __bitmap_weight_gt(const unsigned long *bitmap, unsigned int nbits, unsigned int num);
+bool __bitmap_weight_le(const unsigned long *bitmap, unsigned int nbits, unsigned int num);
 void __bitmap_set(unsigned long *map, unsigned int start, int len);
 void __bitmap_clear(unsigned long *map, unsigned int start, int len);
 
@@ -403,6 +409,33 @@ static __always_inline int bitmap_weight(const unsigned long *src, unsigned int
 	return __bitmap_weight(src, nbits);
 }
 
+static __always_inline bool bitmap_weight_eq(const unsigned long *src,
+		unsigned int nbits, unsigned int num)
+{
+	if (small_const_nbits(nbits))
+		return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits)) == num;
+
+	return __bitmap_weight_eq(src, nbits, num);
+}
+
+static __always_inline bool bitmap_weight_gt(const unsigned long *src,
+		unsigned int nbits, unsigned int num)
+{
+	if (small_const_nbits(nbits))
+		return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits)) > num;
+
+	return __bitmap_weight_gt(src, nbits, num);
+}
+
+static __always_inline bool bitmap_weight_le(const unsigned long *src,
+		unsigned int nbits, unsigned int num)
+{
+	if (small_const_nbits(nbits))
+		return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits)) < num;
+
+	return __bitmap_weight_le(src, nbits, num);
+}
+
 static __always_inline void bitmap_set(unsigned long *map, unsigned int start,
 		unsigned int nbits)
 {
diff --git a/lib/bitmap.c b/lib/bitmap.c
index 926408883456..72e7ab2d7bdd 100644
--- a/lib/bitmap.c
+++ b/lib/bitmap.c
@@ -348,6 +348,69 @@ int __bitmap_weight(const unsigned long *bitmap, unsigned int bits)
 }
 EXPORT_SYMBOL(__bitmap_weight);
 
+bool __bitmap_weight_eq(const unsigned long *bitmap, unsigned int bits, unsigned int num)
+{
+	unsigned int k, w, lim = bits / BITS_PER_LONG;
+
+	for (k = 0, w = 0; k < lim; k++) {
+		if (w + bits - k * BITS_PER_LONG < num)
+			return false;
+
+		w += hweight_long(bitmap[k]);
+
+		if (w > num)
+			return false;
+	}
+
+	if (bits % BITS_PER_LONG)
+		w += hweight_long(bitmap[k] & BITMAP_LAST_WORD_MASK(bits));
+
+	return w == num;
+}
+EXPORT_SYMBOL(__bitmap_weight_eq);
+
+bool __bitmap_weight_gt(const unsigned long *bitmap, unsigned int bits, unsigned int num)
+{
+	unsigned int k, w, lim = bits / BITS_PER_LONG;
+
+	for (k = 0, w = 0; k < lim; k++) {
+		if (w + bits - k * BITS_PER_LONG <= num)
+			return false;
+
+		w += hweight_long(bitmap[k]);
+
+		if (w > num)
+			return true;
+	}
+
+	if (bits % BITS_PER_LONG)
+		w += hweight_long(bitmap[k] & BITMAP_LAST_WORD_MASK(bits));
+
+	return w > num;
+}
+EXPORT_SYMBOL(__bitmap_weight_gt);
+
+bool __bitmap_weight_le(const unsigned long *bitmap, unsigned int bits, unsigned int num)
+{
+	unsigned int k, w, lim = bits / BITS_PER_LONG;
+
+	for (k = 0, w = 0; k < lim; k++) {
+		if (w + bits - k * BITS_PER_LONG < num)
+			return true;
+
+		w += hweight_long(bitmap[k]);
+
+		if (w >= num)
+			return false;
+	}
+
+	if (bits % BITS_PER_LONG)
+		w += hweight_long(bitmap[k] & BITMAP_LAST_WORD_MASK(bits));
+
+	return w < num;
+}
+EXPORT_SYMBOL(__bitmap_weight_le);
+
 void __bitmap_set(unsigned long *map, unsigned int start, int len)
 {
 	unsigned long *p = map + BIT_WORD(start);
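
As a cross-check of the cut-off conditions above, here is a minimal
userspace model of __bitmap_weight_gt() (an illustrative sketch only,
assuming 64-bit words and the GCC/Clang __builtin_popcountll() builtin
in place of the kernel's hweight_long(); 'weight_gt_model' is a
hypothetical name, not part of this patch):

	#include <stdbool.h>

	static bool weight_gt_model(const unsigned long long *map,
				    unsigned int bits, unsigned int num)
	{
		unsigned int k, w = 0, lim = bits / 64;

		for (k = 0; k < lim; k++) {
			/*
			 * Even if every remaining bit were set, the total
			 * weight could not exceed num: give up early.
			 */
			if (w + bits - k * 64 <= num)
				return false;

			w += __builtin_popcountll(map[k]);

			/* Already found more than num set bits: done. */
			if (w > num)
				return true;
		}

		/* Partial last word, masked like BITMAP_LAST_WORD_MASK() */
		if (bits % 64)
			w += __builtin_popcountll(map[k] &
					((1ULL << (bits % 64)) - 1));

		return w > num;
	}

For example, with bits = 256 and num = 250, the first iteration sees
w + 256 - 0 = 256 > 250 and must read word 0; but if that word
contributes only 5 set bits, the second iteration finds
5 + 256 - 64 = 197 <= 250 and returns false without reading the
remaining three words.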