Message ID | 1351450948-15618-1-git-send-email-levinsasha928@gmail.com (mailing list archive)
---|---
State | New, archived
* Sasha Levin (levinsasha928@gmail.com) wrote:
> This hashtable implementation is using hlist buckets to provide a simple
> hashtable to prevent it from getting reimplemented all over the kernel.
>
> Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
> ---
>
> Sorry for the long delay, I was busy with a bunch of personal things.
>
> Changes since v6:
>
>  - Use macros that point to internal static inline functions instead of
>    implementing everything as a macro.
>  - Rebase on latest -next.
>  - Resending the entire patch series on request.
>  - Break early from hash_empty() if found to be non-empty.
>  - DECLARE_HASHTABLE/DEFINE_HASHTABLE.
>
>  include/linux/hashtable.h | 193 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 193 insertions(+)
>  create mode 100644 include/linux/hashtable.h
>
> diff --git a/include/linux/hashtable.h b/include/linux/hashtable.h
> new file mode 100644
> index 0000000..1fb8c97
> --- /dev/null
> +++ b/include/linux/hashtable.h
> @@ -0,0 +1,193 @@
> +/*
> + * Statically sized hash table implementation
> + * (C) 2012 Sasha Levin <levinsasha928@gmail.com>
> + */
> +
> +#ifndef _LINUX_HASHTABLE_H
> +#define _LINUX_HASHTABLE_H
> +
> +#include <linux/list.h>
> +#include <linux/types.h>
> +#include <linux/kernel.h>
> +#include <linux/hash.h>
> +#include <linux/rculist.h>
> +
> +#define DEFINE_HASHTABLE(name, bits) \
> +	struct hlist_head name[1 << bits] = \
> +		{ [0 ... ((1 << bits) - 1)] = HLIST_HEAD_INIT }

Although it's unlikely that someone would use this with a binary operator
with lower precedence than "<<" (see e.g. http://www.swansontec.com/sopc.html)
as "bits", lack of parenthesis around "bits" would be unexpected by the
caller, and could introduce bugs. Please review all macros with the
precedence table in mind, and ask yourself if lack of parenthesis could
introduce a subtle bug.
> +
> +#define DECLARE_HASHTABLE(name, bits) \
> +	struct hlist_head name[1 << (bits)]

Here, you have parenthesis around "bits", but not above (inconsistency).

> +
> +#define HASH_SIZE(name) (ARRAY_SIZE(name))
> +#define HASH_BITS(name) ilog2(HASH_SIZE(name))
> +
> +/* Use hash_32 when possible to allow for fast 32bit hashing in 64bit kernels. */
> +#define hash_min(val, bits) \
> +({ \
> +	sizeof(val) <= 4 ? \
> +	hash_32(val, bits) : \
> +	hash_long(val, bits); \
> +})
> +
> +static inline void __hash_init(struct hlist_head *ht, int sz)

int -> unsigned int.

> +{
> +	int i;

int -> unsigned int.

> +
> +	for (i = 0; i < sz; i++)
> +		INIT_HLIST_HEAD(&ht[sz]);

ouch. How did this work ? Has it been tested at all ?

sz -> i

> +}
> +
> +/**
> + * hash_init - initialize a hash table
> + * @hashtable: hashtable to be initialized
> + *
> + * Calculates the size of the hashtable from the given parameter, otherwise
> + * same as hash_init_size.
> + *
> + * This has to be a macro since HASH_BITS() will not work on pointers since
> + * it calculates the size during preprocessing.
> + */
> +#define hash_init(hashtable) __hash_init(hashtable, HASH_SIZE(hashtable))
> +
> +/**
> + * hash_add - add an object to a hashtable
> + * @hashtable: hashtable to add to
> + * @node: the &struct hlist_node of the object to be added
> + * @key: the key of the object to be added
> + */
> +#define hash_add(hashtable, node, key) \
> +	hlist_add_head(node, &hashtable[hash_min(key, HASH_BITS(hashtable))]);

extra ";" at the end to remove.

> +
> +/**
> + * hash_add_rcu - add an object to a rcu enabled hashtable
> + * @hashtable: hashtable to add to
> + * @node: the &struct hlist_node of the object to be added
> + * @key: the key of the object to be added
> + */
> +#define hash_add_rcu(hashtable, node, key) \
> +	hlist_add_head_rcu(node, &hashtable[hash_min(key, HASH_BITS(hashtable))]);

extra ";" at the end to remove.
> +
> +/**
> + * hash_hashed - check whether an object is in any hashtable
> + * @node: the &struct hlist_node of the object to be checked
> + */
> +#define hash_hashed(node) (!hlist_unhashed(node))

Please use a static inline for this instead of a macro.

> +
> +static inline bool __hash_empty(struct hlist_head *ht, int sz)

int -> unsigned int.

> +{
> +	int i;

int -> unsigned int.

> +
> +	for (i = 0; i < sz; i++)
> +		if (!hlist_empty(&ht[i]))
> +			return false;
> +
> +	return true;
> +}
> +
> +/**
> + * hash_empty - check whether a hashtable is empty
> + * @hashtable: hashtable to check
> + *
> + * This has to be a macro since HASH_BITS() will not work on pointers since
> + * it calculates the size during preprocessing.
> + */
> +#define hash_empty(hashtable) __hash_empty(hashtable, HASH_SIZE(hashtable))
> +
> +/**
> + * hash_del - remove an object from a hashtable
> + * @node: &struct hlist_node of the object to remove
> + */
> +static inline void hash_del(struct hlist_node *node)
> +{
> +	hlist_del_init(node);
> +}
> +
> +/**
> + * hash_del_rcu - remove an object from a rcu enabled hashtable
> + * @node: &struct hlist_node of the object to remove
> + */
> +static inline void hash_del_rcu(struct hlist_node *node)
> +{
> +	hlist_del_init_rcu(node);
> +}
> +
> +/**
> + * hash_for_each - iterate over a hashtable
> + * @name: hashtable to iterate
> + * @bkt: integer to use as bucket loop cursor
> + * @node: the &struct list_head to use as a loop cursor for each entry
> + * @obj: the type * to use as a loop cursor for each entry
> + * @member: the name of the hlist_node within the struct
> + */
> +#define hash_for_each(name, bkt, node, obj, member) \
> +	for (bkt = 0, node = NULL; node == NULL && bkt < HASH_SIZE(name); bkt++)\

if "bkt" happens to be a dereferenced pointer (unary operator '*'), we get
into a situation where "*blah" has higher precedence than "=", higher than
"<", but lower than "++". Any thoughts on fixing this ?
> +		hlist_for_each_entry(obj, node, &name[bkt], member)
> +
> +/**
> + * hash_for_each_rcu - iterate over a rcu enabled hashtable
> + * @name: hashtable to iterate
> + * @bkt: integer to use as bucket loop cursor
> + * @node: the &struct list_head to use as a loop cursor for each entry
> + * @obj: the type * to use as a loop cursor for each entry
> + * @member: the name of the hlist_node within the struct
> + */
> +#define hash_for_each_rcu(name, bkt, node, obj, member) \
> +	for (bkt = 0, node = NULL; node == NULL && bkt < HASH_SIZE(name); bkt++)\

Same comment as above about "bkt".

> +		hlist_for_each_entry_rcu(obj, node, &name[bkt], member)
> +
> +/**
> + * hash_for_each_safe - iterate over a hashtable safe against removal of
> + * hash entry
> + * @name: hashtable to iterate
> + * @bkt: integer to use as bucket loop cursor
> + * @node: the &struct list_head to use as a loop cursor for each entry
> + * @tmp: a &struct used for temporary storage
> + * @obj: the type * to use as a loop cursor for each entry
> + * @member: the name of the hlist_node within the struct
> + */
> +#define hash_for_each_safe(name, bkt, node, tmp, obj, member) \
> +	for (bkt = 0, node = NULL; node == NULL && bkt < HASH_SIZE(name); bkt++)\

Same comment as above about "bkt".
Thanks,

Mathieu

> +		hlist_for_each_entry_safe(obj, node, tmp, &name[bkt], member)
> +
> +/**
> + * hash_for_each_possible - iterate over all possible objects hashing to the
> + * same bucket
> + * @name: hashtable to iterate
> + * @obj: the type * to use as a loop cursor for each entry
> + * @node: the &struct list_head to use as a loop cursor for each entry
> + * @member: the name of the hlist_node within the struct
> + * @key: the key of the objects to iterate over
> + */
> +#define hash_for_each_possible(name, obj, node, member, key) \
> +	hlist_for_each_entry(obj, node, &name[hash_min(key, HASH_BITS(name))], member)
> +
> +/**
> + * hash_for_each_possible_rcu - iterate over all possible objects hashing to the
> + * same bucket in an rcu enabled hashtable
> + * in a rcu enabled hashtable
> + * @name: hashtable to iterate
> + * @obj: the type * to use as a loop cursor for each entry
> + * @node: the &struct list_head to use as a loop cursor for each entry
> + * @member: the name of the hlist_node within the struct
> + * @key: the key of the objects to iterate over
> + */
> +#define hash_for_each_possible_rcu(name, obj, node, member, key) \
> +	hlist_for_each_entry_rcu(obj, node, &name[hash_min(key, HASH_BITS(name))], member)
> +
> +/**
> + * hash_for_each_possible_safe - iterate over all possible objects hashing to the
> + * same bucket safe against removals
> + * @name: hashtable to iterate
> + * @obj: the type * to use as a loop cursor for each entry
> + * @node: the &struct list_head to use as a loop cursor for each entry
> + * @tmp: a &struct used for temporary storage
> + * @member: the name of the hlist_node within the struct
> + * @key: the key of the objects to iterate over
> + */
> +#define hash_for_each_possible_safe(name, obj, node, tmp, member, key) \
> +	hlist_for_each_entry_safe(obj, node, tmp, \
> +		&name[hash_min(key, HASH_BITS(name))], member)
> +
> +
> +#endif
> --
> 1.7.12.4
>
On Mon, Oct 29, 2012 at 7:29 AM, Mathieu Desnoyers
<mathieu.desnoyers@efficios.com> wrote:
> * Sasha Levin (levinsasha928@gmail.com) wrote:
>> +
>> +	for (i = 0; i < sz; i++)
>> +		INIT_HLIST_HEAD(&ht[sz]);
>
> ouch. How did this work ? Has it been tested at all ?
>
> sz -> i

Funny enough, it works perfectly. Generally as a test I boot the
kernel in a VM and let it fuzz with trinity for a bit, doing that with
the code above worked flawlessly.

While it works, it's obviously wrong. Why does it work though? Usually
there's a list op happening pretty soon after that which brings the
list into proper state.

I've been playing with a patch that adds a magic value into list_head
if CONFIG_DEBUG_LIST is set, and checks that magic in the list debug
code in lib/list_debug.c.

Does it sound like something useful? If so I'll send that patch out.


Thanks,
Sasha
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Sasha Levin (levinsasha928@gmail.com) wrote:
> On Mon, Oct 29, 2012 at 7:29 AM, Mathieu Desnoyers
> <mathieu.desnoyers@efficios.com> wrote:
> > * Sasha Levin (levinsasha928@gmail.com) wrote:
> >> +
> >> +	for (i = 0; i < sz; i++)
> >> +		INIT_HLIST_HEAD(&ht[sz]);
> >
> > ouch. How did this work ? Has it been tested at all ?
> >
> > sz -> i
>
> Funny enough, it works perfectly. Generally as a test I boot the
> kernel in a VM and let it fuzz with trinity for a bit, doing that with
> the code above worked flawlessly.
>
> While it works, it's obviously wrong. Why does it work though? Usually
> there's a list op happening pretty soon after that which brings the
> list into proper state.
>
> I've been playing with a patch that adds a magic value into list_head
> if CONFIG_DEBUG_LIST is set, and checks that magic in the list debug
> code in lib/list_debug.c.
>
> Does it sound like something useful? If so I'll send that patch out.

Most of the calls to this initialization function apply it on zeroed
memory (static/kzalloc'd...), which makes it useless. I'd actually be in
favor of removing those redundant calls (as I pointed out in another
email), and document that zeroed memory don't need to be explicitly
initialized.

Those sites that need to really reinitialize memory, or initialize it
(if located on the stack or in non-zeroed dynamically allocated memory)
could use a memset to 0, which will likely be faster than setting to
NULL on many architectures.

About testing, I'd recommend taking the few sites that still need the
initialization function, and just initialize the array with garbage
before calling the initialization function. Things should blow up quite
quickly. Doing it as a one-off thing might be enough to catch any issue.
I don't think we need extra magic numbers to catch issues in this rather
obvious init function.

Thanks,

Mathieu

>
> Thanks,
> Sasha
Hello,

On Mon, Oct 29, 2012 at 12:14:12PM -0400, Mathieu Desnoyers wrote:
> Most of the calls to this initialization function apply it on zeroed
> memory (static/kzalloc'd...), which makes it useless. I'd actually be in
> favor of removing those redundant calls (as I pointed out in another
> email), and document that zeroed memory don't need to be explicitly
> initialized.
>
> Those sites that need to really reinitialize memory, or initialize it
> (if located on the stack or in non-zeroed dynamically allocated memory)
> could use a memset to 0, which will likely be faster than setting to
> NULL on many architectures.

I don't think it's a good idea to optimize out the basic encapsulation
there. We're talking about re-zeroing some static memory areas which
are pretty small. It's just not worth optimizing out at the cost of
proper initialization. e.g. We might add debug fields to list_head
later.

Thanks.
* Tejun Heo (tj@kernel.org) wrote:
> Hello,
>
> On Mon, Oct 29, 2012 at 12:14:12PM -0400, Mathieu Desnoyers wrote:
> > Most of the calls to this initialization function apply it on zeroed
> > memory (static/kzalloc'd...), which makes it useless. I'd actually be in
> > favor of removing those redundant calls (as I pointed out in another
> > email), and document that zeroed memory don't need to be explicitly
> > initialized.
> >
> > Those sites that need to really reinitialize memory, or initialize it
> > (if located on the stack or in non-zeroed dynamically allocated memory)
> > could use a memset to 0, which will likely be faster than setting to
> > NULL on many architectures.
>
> I don't think it's a good idea to optimize out the basic encapsulation
> there. We're talking about re-zeroing some static memory areas which
> are pretty small. It's just not worth optimizing out at the cost of
> proper initializtion. e.g. We might add debug fields to list_head
> later.

Future-proofness for debugging fields is indeed a very compelling
argument. Fair enough! We might want to document this intent at the top
of the initialization function though, just in case anyone want to
short-circuit it.

Thanks,

Mathieu
On Mon, Oct 29, 2012 at 12:14 PM, Mathieu Desnoyers
<mathieu.desnoyers@efficios.com> wrote:
> * Sasha Levin (levinsasha928@gmail.com) wrote:
>> On Mon, Oct 29, 2012 at 7:29 AM, Mathieu Desnoyers
>> <mathieu.desnoyers@efficios.com> wrote:
>> > * Sasha Levin (levinsasha928@gmail.com) wrote:
>> >> +
>> >> +	for (i = 0; i < sz; i++)
>> >> +		INIT_HLIST_HEAD(&ht[sz]);
>> >
>> > ouch. How did this work ? Has it been tested at all ?
>> >
>> > sz -> i
>>
>> Funny enough, it works perfectly. Generally as a test I boot the
>> kernel in a VM and let it fuzz with trinity for a bit, doing that with
>> the code above worked flawlessly.
>>
>> While it works, it's obviously wrong. Why does it work though? Usually
>> there's a list op happening pretty soon after that which brings the
>> list into proper state.
>>
>> I've been playing with a patch that adds a magic value into list_head
>> if CONFIG_DEBUG_LIST is set, and checks that magic in the list debug
>> code in lib/list_debug.c.
>>
>> Does it sound like something useful? If so I'll send that patch out.
>
> Most of the calls to this initialization function apply it on zeroed
> memory (static/kzalloc'd...), which makes it useless. I'd actually be in
> favor of removing those redundant calls (as I pointed out in another
> email), and document that zeroed memory don't need to be explicitly
> initialized.

Why would that make it useless? The idea is that the init functions
will set the magic field to something random, like:

	.magic = 0xBADBEEF0;

And have list_add() and friends WARN(.magic != 0xBADBEEF0, "Using an
uninitialized list\n");

This way we'll catch all places that don't go through list
initialization code.

Thanks,
Sasha
* Sasha Levin (levinsasha928@gmail.com) wrote:
> On Mon, Oct 29, 2012 at 12:14 PM, Mathieu Desnoyers
> <mathieu.desnoyers@efficios.com> wrote:
> > * Sasha Levin (levinsasha928@gmail.com) wrote:
> >> On Mon, Oct 29, 2012 at 7:29 AM, Mathieu Desnoyers
> >> <mathieu.desnoyers@efficios.com> wrote:
> >> > * Sasha Levin (levinsasha928@gmail.com) wrote:
> >> >> +
> >> >> +	for (i = 0; i < sz; i++)
> >> >> +		INIT_HLIST_HEAD(&ht[sz]);
> >> >
> >> > ouch. How did this work ? Has it been tested at all ?
> >> >
> >> > sz -> i
> >>
> >> Funny enough, it works perfectly. Generally as a test I boot the
> >> kernel in a VM and let it fuzz with trinity for a bit, doing that with
> >> the code above worked flawlessly.
> >>
> >> While it works, it's obviously wrong. Why does it work though? Usually
> >> there's a list op happening pretty soon after that which brings the
> >> list into proper state.
> >>
> >> I've been playing with a patch that adds a magic value into list_head
> >> if CONFIG_DEBUG_LIST is set, and checks that magic in the list debug
> >> code in lib/list_debug.c.
> >>
> >> Does it sound like something useful? If so I'll send that patch out.
> >
> > Most of the calls to this initialization function apply it on zeroed
> > memory (static/kzalloc'd...), which makes it useless. I'd actually be in
> > favor of removing those redundant calls (as I pointed out in another
> > email), and document that zeroed memory don't need to be explicitly
> > initialized.
>
> Why would that make it useless? The idea is that the init functions
> will set the magic field to something random, like:
>
> .magic = 0xBADBEEF0;
>
> And have list_add() and friends WARN(.magic != 0xBADBEEF0, "Using an
> uninitialized list\n");
>
> This way we'll catch all places that don't go through list initialization code.

As I replied to Tejun Heo already, I agree that keeping the
initialization in place makes sense for future-proofness.

This intent should probably be documented in a comment about the
initialization function though, just to make sure nobody will try to
skip it.

Thanks,

Mathieu

>
> Thanks,
> Sasha