Message ID | 20200330023248.164994-11-joel@joelfernandes.org (mailing list archive)
---|---
State | New, archived
Series | kfree_rcu() improvements for -rcu dev
Hi "Joel,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on rcu/dev]
[also build test ERROR on rcu/rcu/next next-20200327]
[cannot apply to linus/master linux/master v5.6]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Joel-Fernandes-Google/kfree_rcu-improvements-for-rcu-dev/20200330-113719
base:   https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
config: powerpc-defconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 9.3.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=9.3.0 make.cross ARCH=powerpc

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   kernel/rcu/tree.c: In function 'kfree_rcu_work':
>> kernel/rcu/tree.c:2946:4: error: implicit declaration of function 'vfree'; did you mean 'kfree'? [-Werror=implicit-function-declaration]
    2946 |    vfree(bvhead->records[i]);
         |    ^~~~~
         |    kfree
   cc1: some warnings being treated as errors

vim +2946 kernel/rcu/tree.c

  2884	
  2885	/*
  2886	 * This function is invoked in workqueue context after a grace period.
  2887	 * It frees all the objects queued on ->bhead_free or ->head_free.
  2888	 */
  2889	static void kfree_rcu_work(struct work_struct *work)
  2890	{
  2891		unsigned long flags;
  2892		struct kvfree_rcu_bulk_data *bkhead, *bknext;
  2893		struct kvfree_rcu_bulk_data *bvhead, *bvnext;
  2894		struct rcu_head *head, *next;
  2895		struct kfree_rcu_cpu *krcp;
  2896		struct kfree_rcu_cpu_work *krwp;
  2897		int i;
  2898	
  2899		krwp = container_of(to_rcu_work(work),
  2900			struct kfree_rcu_cpu_work, rcu_work);
  2901	
  2902		krcp = krwp->krcp;
  2903		spin_lock_irqsave(&krcp->lock, flags);
  2904		/* Channel 1. */
  2905		bkhead = krwp->bkvhead_free[0];
  2906		krwp->bkvhead_free[0] = NULL;
  2907	
  2908		/* Channel 2. */
  2909		bvhead = krwp->bkvhead_free[1];
  2910		krwp->bkvhead_free[1] = NULL;
  2911	
  2912		/* Channel 3. */
  2913		head = krwp->head_free;
  2914		krwp->head_free = NULL;
  2915		spin_unlock_irqrestore(&krcp->lock, flags);
  2916	
  2917		/* kmalloc()/kfree() channel. */
  2918		for (; bkhead; bkhead = bknext) {
  2919			bknext = bkhead->next;
  2920	
  2921			debug_rcu_bhead_unqueue(bkhead);
  2922	
  2923			rcu_lock_acquire(&rcu_callback_map);
  2924			trace_rcu_invoke_kfree_bulk_callback(rcu_state.name,
  2925				bkhead->nr_records, bkhead->records);
  2926	
  2927			kfree_bulk(bkhead->nr_records, bkhead->records);
  2928			rcu_lock_release(&rcu_callback_map);
  2929	
  2930			if (cmpxchg(&krcp->bkvcache[0], NULL, bkhead))
  2931				free_page((unsigned long) bkhead);
  2932	
  2933			cond_resched_tasks_rcu_qs();
  2934		}
  2935	
  2936		/* vmalloc()/vfree() channel. */
  2937		for (; bvhead; bvhead = bvnext) {
  2938			bvnext = bvhead->next;
  2939	
  2940			debug_rcu_bhead_unqueue(bvhead);
  2941	
  2942			rcu_lock_acquire(&rcu_callback_map);
  2943			for (i = 0; i < bvhead->nr_records; i++) {
  2944				trace_rcu_invoke_kvfree_callback(rcu_state.name,
  2945					(struct rcu_head *) bvhead->records[i], 0);
> 2946				vfree(bvhead->records[i]);
  2947			}
  2948			rcu_lock_release(&rcu_callback_map);
  2949	
  2950			if (cmpxchg(&krcp->bkvcache[1], NULL, bvhead))
  2951				free_page((unsigned long) bvhead);
  2952	
  2953			cond_resched_tasks_rcu_qs();
  2954		}
  2955	
  2956		/*
  2957		 * This path covers emergency case only due to high
  2958		 * memory pressure also means low memory condition,
  2959		 * when we could not allocate a bulk array.
  2960		 *
  2961		 * Under that condition an object is queued to the
  2962		 * list instead.
  2963		 */
  2964		for (; head; head = next) {
  2965			unsigned long offset = (unsigned long)head->func;
  2966			void *ptr = (void *)head - offset;
  2967	
  2968			next = head->next;
  2969			debug_rcu_head_unqueue((struct rcu_head *)ptr);
  2970			rcu_lock_acquire(&rcu_callback_map);
  2971			trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset);
  2972	
  2973			if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset)))
  2974				kvfree(ptr);
  2975	
  2976			rcu_lock_release(&rcu_callback_map);
  2977			cond_resched_tasks_rcu_qs();
  2978		}
  2979	}
  2980	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Hi "Joel,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on rcu/dev]
[also build test ERROR on rcu/rcu/next next-20200327]
[cannot apply to linus/master linux/master v5.6]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Joel-Fernandes-Google/kfree_rcu-improvements-for-rcu-dev/20200330-113719
base:   https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
config: mips-randconfig-a001-20200330 (attached as .config)
compiler: mips64el-linux-gcc (GCC) 5.5.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=5.5.0 make.cross ARCH=mips

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   kernel/rcu/tree.c: In function 'kfree_rcu_work':
>> kernel/rcu/tree.c:2946:4: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
       vfree(bvhead->records[i]);
       ^
   cc1: some warnings being treated as errors

vim +/vfree +2946 kernel/rcu/tree.c

[ same kernel/rcu/tree.c excerpt (lines 2884-2980) as in the powerpc report above ]
Hello, Joel.

Sent out the patch fixing build error.

--
Vlad Rezki

> Hi "Joel,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on rcu/dev]
> [also build test ERROR on rcu/rcu/next next-20200327]
> [cannot apply to linus/master linux/master v5.6]
> [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system. BTW, we also suggest to use '--base' option to specify the
> base tree in git format-patch, please see https://stackoverflow.com/a/37406982]
>
> url:    https://github.com/0day-ci/linux/commits/Joel-Fernandes-Google/kfree_rcu-improvements-for-rcu-dev/20200330-113719
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
> config: mips-randconfig-a001-20200330 (attached as .config)
> compiler: mips64el-linux-gcc (GCC) 5.5.0
> reproduce:
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         GCC_VERSION=5.5.0 make.cross ARCH=mips
>
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <lkp@intel.com>
>
> All errors (new ones prefixed by >>):
>
>    kernel/rcu/tree.c: In function 'kfree_rcu_work':
> >> kernel/rcu/tree.c:2946:4: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
>        vfree(bvhead->records[i]);
>        ^
>    cc1: some warnings being treated as errors
>
> vim +/vfree +2946 kernel/rcu/tree.c
>
> [...]
On Mon, Mar 30, 2020 at 05:29:51PM +0200, Uladzislau Rezki wrote:
> Hello, Joel.
>
> Sent out the patch fixing build error.

... where?  It didn't get cc'd to linux-mm?
On Mon, Mar 30, 2020 at 08:31:49AM -0700, Matthew Wilcox wrote:
> On Mon, Mar 30, 2020 at 05:29:51PM +0200, Uladzislau Rezki wrote:
> > Hello, Joel.
> >
> > Sent out the patch fixing build error.
>
> ... where?  It didn't get cc'd to linux-mm?

The kbuild test robot complained.  Prior than the build error, the
patch didn't seem all that relevant to linux-mm.  ;-)

							Thanx, Paul
On Mon, Mar 30, 2020 at 08:37:02AM -0700, Paul E. McKenney wrote:
> On Mon, Mar 30, 2020 at 08:31:49AM -0700, Matthew Wilcox wrote:
> > On Mon, Mar 30, 2020 at 05:29:51PM +0200, Uladzislau Rezki wrote:
> > > Hello, Joel.
> > >
> > > Sent out the patch fixing build error.
> >
> > ... where?  It didn't get cc'd to linux-mm?
>
> The kbuild test robot complained.  Prior than the build error, the
> patch didn't seem all that relevant to linux-mm.  ;-)

I asked the preprocessor to tell me why I didn't hit this in my tree. Seems
it because vmalloc.h is included in my tree through the following includes:

./include/linux/nmi.h
./arch/x86/include/asm/nmi.h
./arch/x86/include/asm/io.h
./include/asm-generic/io.h
./include/linux/vmalloc.h

Such paths may not exist in kbuild robot's tree, so I will apply Vlad's patch
to fix this and push it to my rcu/kfree branch.

thanks,

- Joel
On Mon, Mar 30, 2020 at 01:16:06PM -0400, Joel Fernandes wrote:
> On Mon, Mar 30, 2020 at 08:37:02AM -0700, Paul E. McKenney wrote:
> > On Mon, Mar 30, 2020 at 08:31:49AM -0700, Matthew Wilcox wrote:
> > > On Mon, Mar 30, 2020 at 05:29:51PM +0200, Uladzislau Rezki wrote:
> > > > Hello, Joel.
> > > >
> > > > Sent out the patch fixing build error.
> > >
> > > ... where?  It didn't get cc'd to linux-mm?
> >
> > The kbuild test robot complained.  Prior than the build error, the
> > patch didn't seem all that relevant to linux-mm.  ;-)
>
> I asked the preprocessor to tell me why I didn't hit this in my tree. Seems
> it because vmalloc.h is included in my tree through the following includes.
>
Same to me, i did not manage to hit that build error.

--
Vlad Rezki
On Mon, Mar 30, 2020 at 07:43:38PM +0200, Uladzislau Rezki wrote:
> On Mon, Mar 30, 2020 at 01:16:06PM -0400, Joel Fernandes wrote:
> > On Mon, Mar 30, 2020 at 08:37:02AM -0700, Paul E. McKenney wrote:
> > > On Mon, Mar 30, 2020 at 08:31:49AM -0700, Matthew Wilcox wrote:
> > > > On Mon, Mar 30, 2020 at 05:29:51PM +0200, Uladzislau Rezki wrote:
> > > > > Hello, Joel.
> > > > >
> > > > > Sent out the patch fixing build error.
> > > >
> > > > ... where?  It didn't get cc'd to linux-mm?
> > >
> > > The kbuild test robot complained.  Prior than the build error, the
> > > patch didn't seem all that relevant to linux-mm.  ;-)
> >
> > I asked the preprocessor to tell me why I didn't hit this in my tree. Seems
> > it because vmalloc.h is included in my tree through the following includes.
> >
> Same to me, i did not manage to hit that build error.

This is a common occurrence for me.  The kbuild test robot can be very
helpful for this sort of thing.  ;-)

							Thanx, Paul
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index cfe456e68c644..8fbc8450284db 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2808,38 +2808,36 @@ EXPORT_SYMBOL_GPL(call_rcu);
 #define KFREE_N_BATCHES 2
 
 /**
- * struct kfree_rcu_bulk_data - single block to store kfree_rcu() pointers
+ * struct kvfree_rcu_bulk_data - single block to store kvfree() pointers
  * @nr_records: Number of active pointers in the array
- * @records: Array of the kfree_rcu() pointers
  * @next: Next bulk object in the block chain
- * @head_free_debug: For debug, when CONFIG_DEBUG_OBJECTS_RCU_HEAD is set
+ * @records: Array of the SLAB pointers
  */
-struct kfree_rcu_bulk_data {
+struct kvfree_rcu_bulk_data {
 	unsigned long nr_records;
-	void *records[KFREE_BULK_MAX_ENTR];
-	struct kfree_rcu_bulk_data *next;
+	struct kvfree_rcu_bulk_data *next;
+	void *records[];
 };
 
 /*
  * This macro defines how many entries the "records" array
  * will contain. It is based on the fact that the size of
- * kfree_rcu_bulk_data structure becomes exactly one page.
+ * kvfree_rcu_bulk_data become exactly one page.
  */
-#define KFREE_BULK_MAX_ENTR \
-	((PAGE_SIZE - sizeof(struct kfree_rcu_bulk_data)) / sizeof(void *))
+#define KVFREE_BULK_MAX_ENTR \
+	((PAGE_SIZE - sizeof(struct kvfree_rcu_bulk_data)) / sizeof(void *))
 
 /**
  * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests
  * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period
  * @head_free: List of kfree_rcu() objects waiting for a grace period
- * @bhead_free: Bulk-List of kfree_rcu() objects waiting for a grace period
+ * @bkvhead_free: Bulk-List of kfree_rcu() objects waiting for a grace period
  * @krcp: Pointer to @kfree_rcu_cpu structure
  */
-
 struct kfree_rcu_cpu_work {
 	struct rcu_work rcu_work;
 	struct rcu_head *head_free;
-	struct kfree_rcu_bulk_data *bhead_free;
+	struct kvfree_rcu_bulk_data *bkvhead_free[2];
 	struct kfree_rcu_cpu *krcp;
 };
 
@@ -2861,8 +2859,9 @@ struct kfree_rcu_cpu_work {
  */
 struct kfree_rcu_cpu {
 	struct rcu_head *head;
-	struct kfree_rcu_bulk_data *bhead;
-	struct kfree_rcu_bulk_data *bcached;
+	struct kvfree_rcu_bulk_data *bkvhead[2];
+	struct kvfree_rcu_bulk_data *bkvcache[2];
+
 	struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
 	spinlock_t lock;
 	struct delayed_work monitor_work;
@@ -2875,7 +2874,7 @@ struct kfree_rcu_cpu {
 static DEFINE_PER_CPU(struct kfree_rcu_cpu, krc);
 
 static __always_inline void
-debug_rcu_bhead_unqueue(struct kfree_rcu_bulk_data *bhead)
+debug_rcu_bhead_unqueue(struct kvfree_rcu_bulk_data *bhead)
 {
 #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
 	for (int i = 0; i < bhead->nr_records; i++)
@@ -2890,45 +2889,77 @@ debug_rcu_bhead_unqueue(struct kfree_rcu_bulk_data *bhead)
 static void kfree_rcu_work(struct work_struct *work)
 {
 	unsigned long flags;
+	struct kvfree_rcu_bulk_data *bkhead, *bknext;
+	struct kvfree_rcu_bulk_data *bvhead, *bvnext;
 	struct rcu_head *head, *next;
-	struct kfree_rcu_bulk_data *bhead, *bnext;
 	struct kfree_rcu_cpu *krcp;
 	struct kfree_rcu_cpu_work *krwp;
+	int i;
 
 	krwp = container_of(to_rcu_work(work),
-			    struct kfree_rcu_cpu_work, rcu_work);
+		struct kfree_rcu_cpu_work, rcu_work);
+
 	krcp = krwp->krcp;
 	spin_lock_irqsave(&krcp->lock, flags);
+	/* Channel 1. */
+	bkhead = krwp->bkvhead_free[0];
+	krwp->bkvhead_free[0] = NULL;
+
+	/* Channel 2. */
+	bvhead = krwp->bkvhead_free[1];
+	krwp->bkvhead_free[1] = NULL;
+
+	/* Channel 3. */
 	head = krwp->head_free;
 	krwp->head_free = NULL;
-	bhead = krwp->bhead_free;
-	krwp->bhead_free = NULL;
 	spin_unlock_irqrestore(&krcp->lock, flags);
 
-	/* "bhead" is now private, so traverse locklessly. */
-	for (; bhead; bhead = bnext) {
-		bnext = bhead->next;
+	/* kmalloc()/kfree() channel. */
+	for (; bkhead; bkhead = bknext) {
+		bknext = bkhead->next;
 
-		debug_rcu_bhead_unqueue(bhead);
+		debug_rcu_bhead_unqueue(bkhead);
 
 		rcu_lock_acquire(&rcu_callback_map);
 		trace_rcu_invoke_kfree_bulk_callback(rcu_state.name,
-			bhead->nr_records, bhead->records);
+			bkhead->nr_records, bkhead->records);
+
+		kfree_bulk(bkhead->nr_records, bkhead->records);
+		rcu_lock_release(&rcu_callback_map);
+
+		if (cmpxchg(&krcp->bkvcache[0], NULL, bkhead))
+			free_page((unsigned long) bkhead);
+
+		cond_resched_tasks_rcu_qs();
+	}
+
+	/* vmalloc()/vfree() channel. */
+	for (; bvhead; bvhead = bvnext) {
+		bvnext = bvhead->next;
+
+		debug_rcu_bhead_unqueue(bvhead);
 
-		kfree_bulk(bhead->nr_records, bhead->records);
+		rcu_lock_acquire(&rcu_callback_map);
+		for (i = 0; i < bvhead->nr_records; i++) {
+			trace_rcu_invoke_kvfree_callback(rcu_state.name,
+				(struct rcu_head *) bvhead->records[i], 0);
+			vfree(bvhead->records[i]);
+		}
 		rcu_lock_release(&rcu_callback_map);
 
-		if (cmpxchg(&krcp->bcached, NULL, bhead))
-			free_page((unsigned long) bhead);
+		if (cmpxchg(&krcp->bkvcache[1], NULL, bvhead))
+			free_page((unsigned long) bvhead);
 
 		cond_resched_tasks_rcu_qs();
 	}
 
 	/*
-	 * We can end up here either with 1) vmalloc() pointers or 2) were low
-	 * on memory and could not allocate a bulk array. It can happen under
-	 * low memory condition when an allocation gets failed, so the "bulk"
-	 * path can not be temporarly used.
+	 * This path covers emergency case only due to high
+	 * memory pressure also means low memory condition,
+	 * when we could not allocate a bulk array.
+	 *
+	 * Under that condition an object is queued to the
+	 * list instead.
 	 */
 	for (; head; head = next) {
 		unsigned long offset = (unsigned long)head->func;
@@ -2965,21 +2996,34 @@ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
 		krwp = &(krcp->krw_arr[i]);
 
 		/*
-		 * Try to detach bhead or head and attach it over any
+		 * Try to detach bkvhead or head and attach it over any
 		 * available corresponding free channel. It can be that
 		 * a previous RCU batch is in progress, it means that
 		 * immediately to queue another one is not possible so
 		 * return false to tell caller to retry.
 		 */
-		if ((krcp->bhead && !krwp->bhead_free) ||
+		if ((krcp->bkvhead[0] && !krwp->bkvhead_free[0]) ||
+			(krcp->bkvhead[1] && !krwp->bkvhead_free[1]) ||
 				(krcp->head && !krwp->head_free)) {
-			/* Channel 1. */
-			if (!krwp->bhead_free) {
-				krwp->bhead_free = krcp->bhead;
-				krcp->bhead = NULL;
+			/*
+			 * Channel 1 corresponds to SLAB ptrs.
+			 */
+			if (!krwp->bkvhead_free[0]) {
+				krwp->bkvhead_free[0] = krcp->bkvhead[0];
+				krcp->bkvhead[0] = NULL;
+			}
+
+			/*
+			 * Channel 2 corresponds to vmalloc ptrs.
+			 */
+			if (!krwp->bkvhead_free[1]) {
+				krwp->bkvhead_free[1] = krcp->bkvhead[1];
+				krcp->bkvhead[1] = NULL;
 			}
 
-			/* Channel 2. */
+			/*
+			 * Channel 3 corresponds to emergency path.
+			 */
 			if (!krwp->head_free) {
 				krwp->head_free = krcp->head;
 				krcp->head = NULL;
@@ -2988,10 +3032,11 @@ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
 			WRITE_ONCE(krcp->count, 0);
 
 			/*
-			 * One work is per one batch, so there are two "free channels",
-			 * "bhead_free" and "head_free" the batch can handle. It can be
-			 * that the work is in the pending state when two channels have
-			 * been detached following each other, one by one.
+			 * One work is per one batch, so there are three
+			 * "free channels", the batch can handle. It can
+			 * be that the work is in the pending state when
+			 * channels have been detached following by each
+			 * other.
 			 */
 			queue_rcu_work(system_wq, &krwp->rcu_work);
 			queued = true;
@@ -3036,26 +3081,25 @@ static void kfree_rcu_monitor(struct work_struct *work)
 }
 
 static inline bool
-kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp,
-	struct rcu_head *head, rcu_callback_t func)
+kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
 {
-	struct kfree_rcu_bulk_data *bnode;
+	struct kvfree_rcu_bulk_data *bnode;
+	int idx;
 
 	if (unlikely(!krcp->initialized))
 		return false;
 
 	lockdep_assert_held(&krcp->lock);
+	idx = !is_vmalloc_addr(ptr) ? 0:1;
 
 	/* Check if a new block is required. */
-	if (!krcp->bhead ||
-			krcp->bhead->nr_records == KFREE_BULK_MAX_ENTR) {
-		bnode = xchg(&krcp->bcached, NULL);
-		if (!bnode) {
-			WARN_ON_ONCE(sizeof(struct kfree_rcu_bulk_data) > PAGE_SIZE);
-
-			bnode = (struct kfree_rcu_bulk_data *)
+	if (!krcp->bkvhead[idx] ||
+			krcp->bkvhead[idx]->nr_records ==
+				KVFREE_BULK_MAX_ENTR) {
+		bnode = xchg(&krcp->bkvcache[idx], NULL);
+		if (!bnode)
+			bnode = (struct kvfree_rcu_bulk_data *)
 				__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
-		}
 
 		/* Switch to emergency path. */
 		if (unlikely(!bnode))
@@ -3063,30 +3107,30 @@ kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp,
 
 		/* Initialize the new block. */
 		bnode->nr_records = 0;
-		bnode->next = krcp->bhead;
+		bnode->next = krcp->bkvhead[idx];
 
 		/* Attach it to the head. */
-		krcp->bhead = bnode;
+		krcp->bkvhead[idx] = bnode;
 	}
 
 	/* Finally insert. */
-	krcp->bhead->records[krcp->bhead->nr_records++] =
-		(void *) head - (unsigned long) func;
+	krcp->bkvhead[idx]->records
+		[krcp->bkvhead[idx]->nr_records++] = ptr;
 
 	return true;
 }
 
 /*
- * Queue a request for lazy invocation of kfree_bulk()/kvfree() after a grace
- * period. Please note there are two paths are maintained, one is the main one
- * that uses kfree_bulk() interface and second one is emergency one, that is
- * used only when the main path can not be maintained temporary, due to memory
- * pressure.
+ * Queue a request for lazy invocation of appropriate free routine after a
+ * grace period. Please note there are three paths are maintained, two are the
+ * main ones that use array of pointers interface and third one is emergency
+ * one, that is used only when the main path can not be maintained temporary,
+ * due to memory pressure.
  *
 * Each kvfree_call_rcu() request is added to a batch. The batch will be drained
 * every KFREE_DRAIN_JIFFIES number of jiffies. All the objects in the batch will
 * be free'd in workqueue context. This allows us to: batch requests together to
- * reduce the number of grace periods during heavy kfree_rcu() load.
+ * reduce the number of grace periods during heavy kfree_rcu()/kvfree_rcu() load.
  */
 void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
@@ -3110,17 +3154,10 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	}
 
 	/*
-	 * We do not queue vmalloc pointers into array,
-	 * instead they are just queued to the list. We
-	 * do it because of:
-	 * a) to distinguish kmalloc()/vmalloc() ptrs;
-	 * b) there is no vmalloc_bulk() interface.
-	 *
 	 * Under high memory pressure GFP_NOWAIT can fail,
 	 * in that case the emergency path is maintained.
 	 */
-	if (is_vmalloc_addr(ptr) ||
-			!kfree_call_rcu_add_ptr_to_bulk(krcp, head, func)) {
+	if (!kvfree_call_rcu_add_ptr_to_bulk(krcp, ptr)) {
 		head->func = func;
 		head->next = krcp->head;
 		krcp->head = head;