Message ID | 20190403042127.18755-10-tobin@kernel.org (mailing list archive)
---|---
State | New, archived
Series | Slab Movable Objects (SMO)
On Wed, Apr 03, 2019 at 03:21:22PM +1100, Tobin C. Harding wrote:
> +void xa_object_migrate(struct xa_node *node, int numa_node)
> +{
> +        struct xarray *xa = READ_ONCE(node->array);
> +        void __rcu **slot;
> +        struct xa_node *new_node;
> +        int i;
> +
> +        /* Freed or not yet in tree then skip */
> +        if (!xa || xa == XA_RCU_FREE)
> +                return;
> +
> +        new_node = kmem_cache_alloc_node(radix_tree_node_cachep,
> +                                         GFP_KERNEL, numa_node);
> +        if (!new_node)
> +                return;
> +
> +        xa_lock_irq(xa);
> +
> +        /* Check again..... */
> +        if (xa != node->array || !list_empty(&node->private_list)) {
> +                node = new_node;
> +                goto unlock;
> +        }
> +
> +        memcpy(new_node, node, sizeof(struct xa_node));
> +
> +        /* Move pointers to new node */
> +        INIT_LIST_HEAD(&new_node->private_list);

Surely we can do something more clever, like ...

        if (xa != node->array) {
...
        if (list_empty(&node->private_list))
                INIT_LIST_HEAD(&new_node->private_list);
        else
                list_replace(&node->private_list, &new_node->private_list);

BTW, the radix tree nodes / xa_nodes share the same slab cache; we need
to finish converting all radix tree & IDR users to the XArray before
this series can go in.
On Wed, Apr 03, 2019 at 10:23:26AM -0700, Matthew Wilcox wrote:
> On Wed, Apr 03, 2019 at 03:21:22PM +1100, Tobin C. Harding wrote:
> > +void xa_object_migrate(struct xa_node *node, int numa_node)
> > +{
> > +        struct xarray *xa = READ_ONCE(node->array);
> > +        void __rcu **slot;
> > +        struct xa_node *new_node;
> > +        int i;
> > +
> > +        /* Freed or not yet in tree then skip */
> > +        if (!xa || xa == XA_RCU_FREE)
> > +                return;
> > +
> > +        new_node = kmem_cache_alloc_node(radix_tree_node_cachep,
> > +                                         GFP_KERNEL, numa_node);
> > +        if (!new_node)
> > +                return;
> > +
> > +        xa_lock_irq(xa);
> > +
> > +        /* Check again..... */
> > +        if (xa != node->array || !list_empty(&node->private_list)) {
> > +                node = new_node;
> > +                goto unlock;
> > +        }
> > +
> > +        memcpy(new_node, node, sizeof(struct xa_node));
> > +
> > +        /* Move pointers to new node */
> > +        INIT_LIST_HEAD(&new_node->private_list);
>
> Surely we can do something more clever, like ...
>
>         if (xa != node->array) {
> ...
>         if (list_empty(&node->private_list))
>                 INIT_LIST_HEAD(&new_node->private_list);
>         else
>                 list_replace(&node->private_list, &new_node->private_list);

Oh nice, thanks!  I'll roll this into the next version.

> BTW, the radix tree nodes / xa_nodes share the same slab cache; we need
> to finish converting all radix tree & IDR users to the XArray before
> this series can go in.

Ok, I'll add this comment to the commit log for this patch on the next
version so we don't forget.  FTR complete conversion to the XArray is
your goal, isn't it (on the way to the Maple tree)?

thanks,
Tobin.
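For readers skimming the thread: the suggestion above replaces the
unconditional INIT_LIST_HEAD() so that a node already linked on a list
(for example the workingset shadow-node list, which uses
xa_node->private_list) can still be migrated by re-linking the new node
in its place. Below is a minimal sketch of how the middle of
xa_object_migrate() might look with that folded in; the elided failure
path is kept as in the original patch, and the snippet is illustrative
only, not the next version of the series.

        xa_lock_irq(xa);

        /* Re-check under the lock: the node may have been freed meanwhile. */
        if (xa != node->array) {
                /* Lost the race; free the unused new node via the unlock path. */
                node = new_node;
                goto unlock;
        }

        memcpy(new_node, node, sizeof(struct xa_node));

        /*
         * Carry over any list membership instead of skipping such nodes:
         * an empty list gets a fresh head, a non-empty one is re-linked
         * so the new node takes the old node's place.
         */
        if (list_empty(&node->private_list))
                INIT_LIST_HEAD(&new_node->private_list);
        else
                list_replace(&node->private_list, &new_node->private_list);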
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 14d51548bea6..9412c2853726 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -1613,6 +1613,17 @@ static int radix_tree_cpu_dead(unsigned int cpu)
         return 0;
 }
 
+extern void xa_object_migrate(void *tree_node, int numa_node);
+
+static void radix_tree_migrate(struct kmem_cache *s, void **objects, int nr,
+                               int node, void *private)
+{
+        int i;
+
+        for (i = 0; i < nr; i++)
+                xa_object_migrate(objects[i], node);
+}
+
 void __init radix_tree_init(void)
 {
         int ret;
@@ -1627,4 +1638,6 @@ void __init radix_tree_init(void)
         ret = cpuhp_setup_state_nocalls(CPUHP_RADIX_DEAD, "lib/radix:dead",
                                         NULL, radix_tree_cpu_dead);
         WARN_ON(ret < 0);
+        kmem_cache_setup_mobility(radix_tree_node_cachep, NULL,
+                                  radix_tree_migrate);
 }
diff --git a/lib/xarray.c b/lib/xarray.c
index 6be3acbb861f..6d2657f2e4cb 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1971,6 +1971,52 @@ void xa_destroy(struct xarray *xa)
 }
 EXPORT_SYMBOL(xa_destroy);
 
+void xa_object_migrate(struct xa_node *node, int numa_node)
+{
+        struct xarray *xa = READ_ONCE(node->array);
+        void __rcu **slot;
+        struct xa_node *new_node;
+        int i;
+
+        /* Freed or not yet in tree then skip */
+        if (!xa || xa == XA_RCU_FREE)
+                return;
+
+        new_node = kmem_cache_alloc_node(radix_tree_node_cachep,
+                                         GFP_KERNEL, numa_node);
+        if (!new_node)
+                return;
+
+        xa_lock_irq(xa);
+
+        /* Check again..... */
+        if (xa != node->array || !list_empty(&node->private_list)) {
+                node = new_node;
+                goto unlock;
+        }
+
+        memcpy(new_node, node, sizeof(struct xa_node));
+
+        /* Move pointers to new node */
+        INIT_LIST_HEAD(&new_node->private_list);
+        for (i = 0; i < XA_CHUNK_SIZE; i++) {
+                void *x = xa_entry_locked(xa, new_node, i);
+
+                if (xa_is_node(x))
+                        rcu_assign_pointer(xa_to_node(x)->parent, new_node);
+        }
+        if (!new_node->parent)
+                slot = &xa->xa_head;
+        else
+                slot = &xa_parent_locked(xa, new_node)->slots[new_node->offset];
+        rcu_assign_pointer(*slot, xa_mk_node(new_node));
+
+unlock:
+        xa_unlock_irq(xa);
+        xa_node_free(node);
+        rcu_barrier();
+}
+
 #ifdef XA_DEBUG
 void xa_dump_node(const struct xa_node *node)
 {
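As background for the registration hunk above: kmem_cache_setup_mobility()
takes an optional isolate callback (NULL here) in addition to the migrate
callback, and radix_tree_migrate() follows the migrate signature used
throughout this series. A rough sketch of the contract those callbacks are
expected to follow; the typedef and function names below are illustrative,
taken from the cover text of the series as I read it, and this is not the
actual SLUB mobility code.

typedef void *kmem_cache_isolate_func(struct kmem_cache *s, void **objs, int nr);
typedef void kmem_cache_migrate_func(struct kmem_cache *s, void **objs, int nr,
                                     int node, void *private);

/*
 * Conceptual driver: the mobility core hands a batch of objects from a
 * sparsely populated slab to the cache's callbacks.  isolate(), when
 * registered, pins the objects and returns an opaque cookie; migrate()
 * then re-allocates each object on the target node and frees the old
 * copy, which is what xa_object_migrate() does per node above.
 */
static void move_objects(struct kmem_cache *s, kmem_cache_isolate_func *isolate,
                         kmem_cache_migrate_func *migrate, void **objs, int nr,
                         int node)
{
        void *private = NULL;

        if (isolate)
                private = isolate(s, objs, nr);
        migrate(s, objs, nr, node, private);
}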
Implement functions to migrate objects. This is based on initial code by
Matthew Wilcox and was modified to work with slab object migration.

Cc: Matthew Wilcox <willy@infradead.org>
Co-developed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 lib/radix-tree.c | 13 +++++++++++++
 lib/xarray.c     | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+)