Commit 2fc016c5bd8a ("linux/const.h: Add _BITUL() and _BITULL()")
introduced _BITUL() and _BITULL(). Its git-log says the difference
from the already existing BIT() is:

  1. The namespace is such that they can be used in uapi definitions.
  2. The type is set with the _AC() macro to allow it to be used in
     assembly.
  3. The type is explicitly specified to be UL or ULL.

However, I found _BITUL() is often used for "2. use in assembly",
while "1. use in uapi" is unneeded. If we address only "2.", we can
improve the existing BIT() for that. This will allow us to replace
many _BITUL() instances with BIT(), i.e. avoid needless use of
underscore-prefixed macros and, in the end, decouple the
userspace/kernel headers more cleanly.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

Changes in v2: None

 include/linux/bitops.h | 3 +--
 include/linux/const.h  | 3 +++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/bitops.h b/include/linux/bitops.h
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -1,10 +1,9 @@
 #ifndef _LINUX_BITOPS_H
 #define _LINUX_BITOPS_H
+#include <linux/const.h>
 #include <asm/types.h>
 
 #ifdef __KERNEL__
-#define BIT(nr) (1UL << (nr))
-#define BIT_ULL(nr) (1ULL << (nr))
 #define BIT_MASK(nr) (1UL << ((nr) % BITS_PER_LONG))
 #define BIT_WORD(nr) ((nr) / BITS_PER_LONG)
 #define BIT_ULL_MASK(nr) (1ULL << ((nr) % BITS_PER_LONG_LONG))
diff --git a/include/linux/const.h b/include/linux/const.h
--- a/include/linux/const.h
+++ b/include/linux/const.h
@@ -6,4 +6,7 @@
 #define UL(x) (_UL(x))
 #define ULL(x) (_ULL(x))
 
+#define BIT(x) (_BITUL(x))
+#define BIT_ULL(x) (_BITULL(x))
+
 #endif /* _LINUX_CONST_H */