Message ID: 1487588781-15123-4-git-send-email-elena.reshetova@intel.com (mailing list archive)
State: New, archived
From: Elena Reshetova <elena.reshetova@intel.com>
Date: Mon, 20 Feb 2017 13:06:20 +0200

> refcount_t type and corresponding API should be
> used instead of atomic_t when the variable is used as
> a reference counter. This allows to avoid accidental
> refcounter overflows that might lead to use-after-free
> situations.
>
> Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
> Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: David Windsor <dwindsor@gmail.com>

Acked-by: David S. Miller <davem@davemloft.net>
> From: Elena Reshetova <elena.reshetova@intel.com>
> Date: Mon, 20 Feb 2017 13:06:20 +0200
>
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> >
> > Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
> > Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com>
> > Signed-off-by: Kees Cook <keescook@chromium.org>
> > Signed-off-by: David Windsor <dwindsor@gmail.com>
>
> Acked-by: David S. Miller <davem@davemloft.net>

Hi David,

Would you be able to propagate this patch further or should I send
it (with your acked-by) once more to a specific list/maintainer for
the inclusion?

Best Regards,
Elena
From: "Reshetova, Elena" <elena.reshetova@intel.com>
Date: Mon, 3 Apr 2017 07:28:01 +0000

>> From: Elena Reshetova <elena.reshetova@intel.com>
>> Date: Mon, 20 Feb 2017 13:06:20 +0200
>>
>> > refcount_t type and corresponding API should be
>> > used instead of atomic_t when the variable is used as
>> > a reference counter. This allows to avoid accidental
>> > refcounter overflows that might lead to use-after-free
>> > situations.
>> >
>> > Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
>> > Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com>
>> > Signed-off-by: Kees Cook <keescook@chromium.org>
>> > Signed-off-by: David Windsor <dwindsor@gmail.com>
>>
>> Acked-by: David S. Miller <davem@davemloft.net>
>
> Hi David,
>
> Would you be able to propagate this patch further or should I send
> it (with your acked-by) once more to a specific list/maintainer for
> the inclusion?

I'm generally not happy with the refcount_t and the added overhead it
has compared to the existing atomic_t operations.

I know it is going to make a difference for networking.

I understand that this sparc case is a slow path, but I know that if
we just apply all of these refcount_t conversions, there will be no
work done to address the performance issues.

So I'm reluctant to ACK in any way these refcount_t conversions,
sorry.
> From: "Reshetova, Elena" <elena.reshetova@intel.com>
> Date: Mon, 3 Apr 2017 07:28:01 +0000
>
> >> From: Elena Reshetova <elena.reshetova@intel.com>
> >> Date: Mon, 20 Feb 2017 13:06:20 +0200
> >>
> >> > refcount_t type and corresponding API should be
> >> > used instead of atomic_t when the variable is used as
> >> > a reference counter. This allows to avoid accidental
> >> > refcounter overflows that might lead to use-after-free
> >> > situations.
> >> >
> >> > Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
> >> > Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com>
> >> > Signed-off-by: Kees Cook <keescook@chromium.org>
> >> > Signed-off-by: David Windsor <dwindsor@gmail.com>
> >>
> >> Acked-by: David S. Miller <davem@davemloft.net>
> >
> > Hi David,
> >
> > Would you be able to propagate this patch further or should I send
> > it (with your acked-by) once more to a specific list/maintainer for
> > the inclusion?
>
> I'm generally not happy with the refcount_t and the added overhead it
> has compared to the existing atomic_t operations.
>
> I know it is going to make a difference for networking.
>
> I understand that this sparc case is a slow path, but I know that if
> we just apply all of these refcount_t conversions, there will be no
> work done to address the performance issues.

I think we will have to address the performance problems in places
where we can see it matters. The problem is that so far no one has
told us how to measure the overhead in any reasonable way, neither in
the networking nor in the mm changes. If this change is a slow path,
why would it matter for *this particular patch*?

Best Regards,
Elena.

> So I'm reluctant to ACK in any way these refcount_t conversions,
> sorry.
From: "Reshetova, Elena" <elena.reshetova@intel.com>
Date: Mon, 3 Apr 2017 16:06:44 +0000

>> From: "Reshetova, Elena" <elena.reshetova@intel.com>
>> Date: Mon, 3 Apr 2017 07:28:01 +0000
>>
>> >> From: Elena Reshetova <elena.reshetova@intel.com>
>> >> Date: Mon, 20 Feb 2017 13:06:20 +0200
>> >>
>> >> > refcount_t type and corresponding API should be
>> >> > used instead of atomic_t when the variable is used as
>> >> > a reference counter. This allows to avoid accidental
>> >> > refcounter overflows that might lead to use-after-free
>> >> > situations.
>> >> >
>> >> > Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
>> >> > Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com>
>> >> > Signed-off-by: Kees Cook <keescook@chromium.org>
>> >> > Signed-off-by: David Windsor <dwindsor@gmail.com>
>> >>
>> >> Acked-by: David S. Miller <davem@davemloft.net>
>> >
>> > Hi David,
>> >
>> > Would you be able to propagate this patch further or should I send
>> > it (with your acked-by) once more to a specific list/maintainer for
>> > the inclusion?
>>
>> I'm generally not happy with the refcount_t and the added overhead it
>> has compared to the existing atomic_t operations.
>>
>> I know it is going to make a difference for networking.
>>
>> I understand that this sparc case is a slow path, but I know that if
>> we just apply all of these refcount_t conversions, there will be no
>> work done to address the performance issues.
>
> I think we will have to address the performance problems in places
> where we can see it matters. The problem is that so far no one has
> told us how to measure the overhead in any reasonable way, neither in
> the networking nor in the mm changes. If this change is a slow path,
> why would it matter for *this particular patch*?

I think not having a way to avoid the function call makes the facility
unusable as a core kernel facility.

You can't just say "oh well, just convert the slow paths, we'll solve
the fundamental performance issue later".

Sorry, that is not how we do things.
diff --git a/arch/sparc/kernel/mdesc.c b/arch/sparc/kernel/mdesc.c
index c0765bb..ac3fe0d 100644
--- a/arch/sparc/kernel/mdesc.c
+++ b/arch/sparc/kernel/mdesc.c
@@ -12,6 +12,7 @@
 #include <linux/miscdevice.h>
 #include <linux/bootmem.h>
 #include <linux/export.h>
+#include <linux/refcount.h>
 
 #include <asm/cpudata.h>
 #include <asm/hypervisor.h>
@@ -70,7 +71,7 @@ struct mdesc_handle {
 	struct list_head	list;
 	struct mdesc_mem_ops	*mops;
 	void			*self_base;
-	atomic_t		refcnt;
+	refcount_t		refcnt;
 	unsigned int		handle_size;
 	struct mdesc_hdr	mdesc;
 };
@@ -84,7 +85,7 @@ static void mdesc_handle_init(struct mdesc_handle *hp,
 	memset(hp, 0, handle_size);
 	INIT_LIST_HEAD(&hp->list);
 	hp->self_base = base;
-	atomic_set(&hp->refcnt, 1);
+	refcount_set(&hp->refcnt, 1);
 	hp->handle_size = handle_size;
 }
@@ -114,7 +115,7 @@ static void __init mdesc_memblock_free(struct mdesc_handle *hp)
 	unsigned int alloc_size;
 	unsigned long start;
 
-	BUG_ON(atomic_read(&hp->refcnt) != 0);
+	BUG_ON(refcount_read(&hp->refcnt) != 0);
 	BUG_ON(!list_empty(&hp->list));
 
 	alloc_size = PAGE_ALIGN(hp->handle_size);
@@ -154,7 +155,7 @@ static struct mdesc_handle *mdesc_kmalloc(unsigned int mdesc_size)
 
 static void mdesc_kfree(struct mdesc_handle *hp)
 {
-	BUG_ON(atomic_read(&hp->refcnt) != 0);
+	BUG_ON(refcount_read(&hp->refcnt) != 0);
 	BUG_ON(!list_empty(&hp->list));
 
 	kfree(hp->self_base);
@@ -193,7 +194,7 @@ struct mdesc_handle *mdesc_grab(void)
 	spin_lock_irqsave(&mdesc_lock, flags);
 	hp = cur_mdesc;
 	if (hp)
-		atomic_inc(&hp->refcnt);
+		refcount_inc(&hp->refcnt);
 	spin_unlock_irqrestore(&mdesc_lock, flags);
 
 	return hp;
@@ -205,7 +206,7 @@ void mdesc_release(struct mdesc_handle *hp)
 	unsigned long flags;
 
 	spin_lock_irqsave(&mdesc_lock, flags);
-	if (atomic_dec_and_test(&hp->refcnt)) {
+	if (refcount_dec_and_test(&hp->refcnt)) {
 		list_del_init(&hp->list);
 		hp->mops->free(hp);
 	}
@@ -344,7 +345,7 @@ void mdesc_update(void)
 	if (status != HV_EOK || real_len > len) {
 		printk(KERN_ERR "MD: mdesc reread fails with %lu\n",
 		       status);
-		atomic_dec(&hp->refcnt);
+		refcount_dec(&hp->refcnt);
 		mdesc_free(hp);
 		goto out;
 	}
@@ -357,7 +358,7 @@ void mdesc_update(void)
 	mdesc_notify_clients(orig_hp, hp);
 
 	spin_lock_irqsave(&mdesc_lock, flags);
-	if (atomic_dec_and_test(&orig_hp->refcnt))
+	if (refcount_dec_and_test(&orig_hp->refcnt))
 		mdesc_free(orig_hp);
 	else
 		list_add(&orig_hp->list, &mdesc_zombie_list);