From patchwork Mon Apr 2 22:58:31 2018
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 10320511
From: Omar Sandoval
To: linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: kernel-team@fb.com, Matthew Wilcox, Andrew Morton, Rasmus Villemoes,
	Linus Torvalds, stable@kernel.org
Subject: [PATCH] bitmap: fix memset optimization on big-endian systems
Date: Mon, 2 Apr 2018 15:58:31 -0700
Message-Id: <817147544aa3ecc2b78d6cadeab713869d8805e6.1522709616.git.osandov@fb.com>
X-Mailer: git-send-email 2.16.3
X-Mailing-List: linux-btrfs@vger.kernel.org

From: Omar Sandoval

Commit 2a98dc028f91 introduced an optimization to bitmap_{set,clear}()
which uses memset() when the start and length are constants aligned to a
byte. This is wrong on big-endian systems; our bitmaps are arrays of
unsigned long, so bit n is not at byte n / 8 in memory. This was caught
by the Btrfs selftests, but the bitmap selftests also fail when run on a
big-endian machine.

We can still use memset if the start and length are aligned to an
unsigned long, so do that on big-endian. The same problem applies to the
memcmp in bitmap_equal(), so fix it there, too.

Fixes: 2a98dc028f91 ("include/linux/bitmap.h: turn bitmap_set and bitmap_clear into memset when possible")
Fixes: 2c6deb01525a ("bitmap: use memcmp optimisation in more situations")
Cc: stable@kernel.org
Reported-by: "Erhard F."
Signed-off-by: Omar Sandoval
---
 include/linux/bitmap.h | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 5f11fbdc27f8..1ee46f492267 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -302,12 +302,20 @@ static inline void bitmap_complement(unsigned long *dst, const unsigned long *sr
 	__bitmap_complement(dst, src, nbits);
 }
 
+#ifdef __LITTLE_ENDIAN
+#define BITMAP_MEM_ALIGNMENT 8
+#else
+#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long))
+#endif
+#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1)
+
 static inline int bitmap_equal(const unsigned long *src1,
 			const unsigned long *src2, unsigned int nbits)
 {
 	if (small_const_nbits(nbits))
 		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
-	if (__builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+	if (__builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+	    IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
 		return !memcmp(src1, src2, nbits / 8);
 	return __bitmap_equal(src1, src2, nbits);
 }
@@ -358,8 +366,10 @@ static __always_inline void bitmap_set(unsigned long *map, unsigned int start,
 {
 	if (__builtin_constant_p(nbits) && nbits == 1)
 		__set_bit(start, map);
-	else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
-		 __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+	else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
+		 IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
+		 __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+		 IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
 		memset((char *)map + start / 8, 0xff, nbits / 8);
 	else
 		__bitmap_set(map, start, nbits);
@@ -370,8 +380,10 @@ static __always_inline void bitmap_clear(unsigned long *map, unsigned int start,
 {
 	if (__builtin_constant_p(nbits) && nbits == 1)
 		__clear_bit(start, map);
-	else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
-		 __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+	else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
+		 IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
+		 __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+		 IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
 		memset((char *)map + start / 8, 0, nbits / 8);
 	else
 		__bitmap_clear(map, start, nbits);