Message ID | 59ee3289194cd97d70085cce701bc494bfcb4fd2.1615372955.git.gladkov.alexey@gmail.com (mailing list archive) |
---|---|
State | New, archived |
Series | Count rlimits in each user namespace |
On Wed, Mar 10, 2021 at 4:01 AM Alexey Gladkov <gladkov.alexey@gmail.com> wrote:
>
> +/* 127: arbitrary random number, small enough to assemble well */
> +#define refcount_zero_or_close_to_overflow(ucounts) \
> +	((unsigned int) atomic_read(&ucounts->count) + 127u <= 127u)
> +
> +struct ucounts *get_ucounts(struct ucounts *ucounts)
> +{
> +	if (ucounts) {
> +		if (refcount_zero_or_close_to_overflow(ucounts)) {
> +			WARN_ONCE(1, "ucounts: counter has reached its maximum value");
> +			return NULL;
> +		}
> +		atomic_inc(&ucounts->count);
> +	}
> +	return ucounts;

Side note: you probably should just make the limit be the "oh, the
count overflows into the sign bit".

The reason the page cache did that tighter thing is that it actually
has _two_ limits:

 - the "try_get_page()" thing uses the sign bit as a "uhhuh, I've now
   used up half of the available reference counting bits, and I will
   refuse to use any more".

   This is basically your "get_ucounts()" function. It's a "I want a
   refcount, but I'm willing to deal with failures".

 - the page cache has a _different_ set of "I need to unconditionally
   get a refcount, and I can *not* deal with failures". This is
   basically the traditional "get_page()", which is only used in
   fairly controlled places, and should never be something that can
   overflow.

   And *that* special code then uses that
   "zero_or_close_to_overflow()" case as a "doing a get_page() in this
   situation is very very wrong". This is purely a debugging feature
   used for a VM_BUG_ON() (that has never triggered, as far as I know).

For your ucounts situation, you don't have that second case at all,
so you have no reason to ever allow the count to even get remotely
close to overflowing.

A reference count being within 128 counts of overflow (when we're
talking a 32-bit count) is basically never a good idea. It means that
you are way too close to the limit, and there's a risk that lots of
concurrent people all first see an ok value, and then *all* decide to
do the increment, and then you're toast.

In contrast, if you use the sign bit as a "ok, let's stop
incrementing", the fact that your "overflow" test and the increment
aren't atomic really isn't a big deal.

(And yes, you could use a cmpxchg to *make* the overflow test atomic,
but it's often much much more expensive, so..)

           Linus
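For illustration only, a minimal sketch of the sign-bit limit described
above might look like this (an editorial sketch, not the posted patch:
it keeps the patch's atomic_t counter and simply refuses new references
once half of the 32-bit counting space has been used):

	struct ucounts *get_ucounts(struct ucounts *ucounts)
	{
		if (!ucounts)
			return NULL;
		/*
		 * Limit check in the style of try_get_page(): once the
		 * signed counter has gone negative, roughly 2^31 references
		 * are outstanding, so refuse to hand out any more.  The
		 * check and the increment need not be atomic with each
		 * other - a race can only push the count slightly past the
		 * halfway point, which is still nowhere near wrapping back
		 * to zero.
		 */
		if (atomic_read(&ucounts->count) < 0)
			return NULL;
		atomic_inc(&ucounts->count);
		return ucounts;
	}

A cmpxchg loop would make the check and the increment atomic with each
other, but as noted above that is usually more expensive than a plain
atomic_inc(), and the huge margin makes it unnecessary.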
On Wed, Mar 10, 2021 at 01:01:28PM +0100, Alexey Gladkov wrote:
> The current implementation of the ucounts reference counter requires the
> use of spin_lock. We're going to use get_ucounts() in more performance
> critical areas like a handling of RLIMIT_SIGPENDING.

This really looks like it should be refcount_t. I read the earlier
thread[1] on this, and it's not clear to me that this is a "normal"
condition. I think there was a bug in that version (This appeared to
*instantly* crash at boot with mnt_init() calling alloc_mnt_ns()
calling inc_ucount()). The current code looks like just a "regular"
reference counter of the allocated struct ucounts. Overflow should be
very unexpected, yes? And operating on a "0" ucounts should be a bug
too, right?

> [...]
> +/* 127: arbitrary random number, small enough to assemble well */
> +#define refcount_zero_or_close_to_overflow(ucounts) \
> +	((unsigned int) atomic_read(&ucounts->count) + 127u <= 127u)

Regardless, this should absolutely not have "refcount" as a prefix. I
realize it's only used here, but that's needlessly confusing with
regard to it being atomic_t not refcount_t.

> +struct ucounts *get_ucounts(struct ucounts *ucounts)
> +{
> +	if (ucounts) {
> +		if (refcount_zero_or_close_to_overflow(ucounts)) {
> +			WARN_ONCE(1, "ucounts: counter has reached its maximum value");
> +			return NULL;
> +		}
> +		atomic_inc(&ucounts->count);
> +	}
> +	return ucounts;
> +}

I feel like this should just be:

	refcount_inc_not_zero(&ucounts->count);

Or, to address Linus's comment in the v3 series, change get_ucounts to
not return NULL first -- I can't see why that can ever happen in v8.

-Kees

[1] https://lore.kernel.org/lkml/116c7669744404364651e3b380db2d82bb23f983.1610722473.git.gladkov.alexey@gmail.com/
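For comparison, the refcount_t shape suggested here would look roughly
like the sketch below (illustrative only, and it assumes ->count were
changed to refcount_t; refcount_inc_not_zero() refuses to take a
reference on an already-zero count, and counts near overflow saturate
instead of wrapping):

	struct ucounts *get_ucounts(struct ucounts *ucounts)
	{
		/*
		 * Fails only if the count is already zero; a count close to
		 * overflow saturates inside refcount_t rather than failing.
		 */
		if (ucounts && !refcount_inc_not_zero(&ucounts->count))
			return NULL;
		return ucounts;
	}

As the follow-ups below point out, that saturating behaviour is exactly
what the ucounts code does not want, which is why the thread ends up
rejecting refcount_t here.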
On Mon, Mar 15, 2021 at 3:03 PM Kees Cook <keescook@chromium.org> wrote:
>
> On Wed, Mar 10, 2021 at 01:01:28PM +0100, Alexey Gladkov wrote:
> > The current implementation of the ucounts reference counter requires the
> > use of spin_lock. We're going to use get_ucounts() in more performance
> > critical areas like a handling of RLIMIT_SIGPENDING.
>
> This really looks like it should be refcount_t.

No. refcount_t didn't have the capabilities required.

It just saturates, and doesn't have the "don't do this" case, which
the ucounts case *DOES* have.

In other words, refcount_t is entirely misdesigned for this - because
it's literally designed for "people can't handle overflow, so we warn
and saturate".

ucounts can never saturate, because they replace saturation with
"don't do that then". In other words, ucounts work like the page
counts do (which also don't saturate, they just say "ok, you can't get
a reference").

I know you are attached to refcounts, but really: they are not only
more expensive, THEY LITERALLY DO THE WRONG THING.

           Linus
On Mon, Mar 15, 2021 at 03:19:17PM -0700, Linus Torvalds wrote:
> It just saturates, and doesn't have the "don't do this" case, which
> the ucounts case *DOES* have.

Right -- I saw that when digging through the thread. I'm honestly
curious, though, why did the 0-day bot find a boot crash? (I can't
imagine ucounts wrapped in 0.4 seconds.) So it looked like an
increment-from-zero case, which seems like it would be a bug?

> I know you are attached to refcounts, but really: they are not only
> more expensive, THEY LITERALLY DO THE WRONG THING.

Heh, right -- I'm not arguing that refcount_t MUST be used, I just
didn't see the code path that made them unsuitable: hitting INT_MAX -
128 seems very hard to do. Anyway, I'll go study it more to try to
understand what I'm missing.
On Tue, Mar 16, 2021 at 11:49 AM Kees Cook <keescook@chromium.org> wrote:
>
> Right -- I saw that when digging through the thread. I'm honestly
> curious, though, why did the 0-day bot find a boot crash? (I can't
> imagine ucounts wrapped in 0.4 seconds.) So it looked like an
> increment-from-zero case, which seems like it would be a bug?

Agreed. It's almost certainly a bug.

Possibly a use-after-free, but more likely just a "this count had
never gotten initialized to anything but zero, but is used by the init
process (and kernel threads) and will be incremented but never be
free'd, so we never noticed".

> Heh, right -- I'm not arguing that refcount_t MUST be used, I just
> didn't see the code path that made them unsuitable: hitting INT_MAX -
> 128 seems very hard to do. Anyway, I'll go study it more to try to
> understand what I'm missing.

So as you may have seen later in the thread, I don't like the
"INT_MAX - 128" as a limit.

I think the page count thing does the right thing: it has separate
"debug checks" and "limit checks", and the way it's done it never
really needs to worry about doing the (often) expensive cmpxchg loop,
because the limit check is _so_ far off the final case that we don't
care, and the debug checks aren't about races, they are about "uhhuh,
you used this wrong".

So what the page code does is:

 - try_get_page() has a limit check _and_ a debug check:

   (a) the limit check is "you've used up half the refcounts, I'm not
       giving you any more".

   (b) the debug check is "you can't get a page that has a zero count
       or has underflowed".

It's not obvious that it has both of those checks, because they are
merged into one single WARN_ON_ONCE(), but that's purely for "we
actually want that warning for the limit check, because that looks
like somebody trying an attack" and it just got combined.

So technically, the code really should do

	page = compound_head(page);

	/* Debug check for mis-use of the count */
	if (WARN_ON_ONCE(page_ref_zero_or_close_to_overflow(page)))
		return false;

	/*
	 * Limit check - we're not incrementing the
	 * count (much) past the halfway point
	 */
	if (page_ref_count(page) <= 0)
		return false;

	/* The actual atomic reference - the above were done "carelessly" */
	page_ref_inc(page);
	return true;

because the "oh, we're not allowing you this ref" is not _technically_
wrong, it's just traditionally wrong, if you see what I mean.

And notice how none of the above really cares about the
"page_ref_inc()" itself being atomic wrt the checks. It's ok if we
race, and the page ref goes a bit above the half-way point. You can't
race _so_ much that you actually overflow, because our limit check is
_so_ far away from the overflow area that it's not an issue.

And similarly, the debug check with
page_ref_zero_or_close_to_overflow() is one of those things that are
trying to see underflows or bad use-cases, and trying to do that
atomically with the actual ref update doesn't really help. The
underflow or mis-use will have happened before we increment the page
count.

So the above is very close to what the ucounts code I think really
wants to do: the "zero_or_close_to_overflow" is an error case: it
means something just underflowed, or you were trying to increment a
ref to something you didn't have a reference to in the first place.

And the "<= 0" check is just the cheap test for "I'm giving you at
most half the counter space, because I don't want to have to even
remotely worry about overflow".
Note that the above very intentionally does allow the "we can go over
the limit" case for another reason: we still have that regular
*unconditional* get_page(), that has a "I absolutely need a temporary
ref to this page, but I know it's not some long-term thing that a user
can force".

That's not only our traditional model, but it's something that some
kernel code simply does need, so it's a good feature in itself. That
might be less of an issue for ucounts, but for pages, we sometimes do
have "I need to take a ref to this page just for my own use while I
then drop the page lock and do something else".

The "put_page()" case then has its own debug check (in
"put_page_testzero()") which says "hey, you can't put a page that has
no refcount". That could easily use that
"zero_or_close_to_overflow()" rule too, but if you actually do
underflow for real, you'll see the zero (again - races aren't really
important, because even if you have some attack vector that depends on
the race, such attack vectors will also have to depend on doing the
thing over and over and over again until it successfully hits the
race, so you'll see the zero case in practice, and trying to be
"atomic" for debug testing is thus pointless).

So I do think our page counting thing is actually pretty good. And
it's possible that "refcount_t" could use that exact same model, and
actually then offer that option that ucounts wants, of a "try to get a
refcount, but if we have too many refcounts, then never mind, I can
just return an error to user space instead".

Hmm?

On x86 (and honestly, these days on arm too with the new atomics),
it's generally quite a bit cheaper to do an atomic increment/decrement
than it is to do a cmpxchg loop. That seems to become even more true
as microarchitectures optimize those atomics - apparently AMD actually
does regular locked ops by doing them optimistically out-of-order, and
verifying that the serialization requirements hold after-the-fact.

So plain simple locked ops that historically used to be quite
expensive are getting less so (because they've obviously gotten much
more important over the years).

           Linus
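Translated to ucounts, the two-check structure and the put-side debug
check described above might look something like the following. This is
an illustrative sketch only: the helper name, try_get_ucounts(), and
the WARN placement are hypothetical and not code from the series; the
put side is just the patch's put_ucounts() with a debug check added.

	/* Hypothetical helper mirroring page_ref_zero_or_close_to_overflow() */
	#define ucounts_zero_or_close_to_overflow(uc) \
		((unsigned int) atomic_read(&(uc)->count) + 127u <= 127u)

	static bool try_get_ucounts(struct ucounts *ucounts)
	{
		/* Debug check: a zero or underflowed count means mis-use */
		if (WARN_ON_ONCE(ucounts_zero_or_close_to_overflow(ucounts)))
			return false;

		/* Limit check: stop handing out refs at half the counter space */
		if (atomic_read(&ucounts->count) < 0)
			return false;

		/* The actual reference - the checks above were done "carelessly" */
		atomic_inc(&ucounts->count);
		return true;
	}

	void put_ucounts(struct ucounts *ucounts)
	{
		unsigned long flags;

		/*
		 * Debug check in the spirit of put_page_testzero(): dropping
		 * a reference that was never taken shows up here as a zero
		 * or underflowed count; no atomicity is needed, since the
		 * mis-use has already happened by the time we get here.
		 */
		WARN_ON_ONCE(atomic_read(&ucounts->count) <= 0);

		if (atomic_dec_and_test(&ucounts->count)) {
			spin_lock_irqsave(&ucounts_lock, flags);
			hlist_del_init(&ucounts->node);
			spin_unlock_irqrestore(&ucounts_lock, flags);
			kfree(ucounts);
		}
	}

Neither check needs to be atomic with the counter update itself: the
limit is so far from an actual wrap that a race only pushes the count
slightly past the halfway point, and the debug cases have already gone
wrong before the update happens.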
On Tue, Mar 16, 2021 at 12:26:05PM -0700, Linus Torvalds wrote:
> Note that the above very intentionally does allow the "we can go over
> the limit" case for another reason: we still have that regular
> *unconditional* get_page(), that has a "I absolutely need a temporary
> ref to this page, but I know it's not some long-term thing that a user
> can force". That's not only our traditional model, but it's something
> that some kernel code simply does need, so it's a good feature in
> itself. That might be less of an issue for ucounts, but for pages, we
> sometimes do have "I need to take a ref to this page just for my own
> use while I then drop the page lock and do something else".

Right, get_page() has a whole other set of requirements. :) I just
couldn't find the "we _must_ get a reference to ucounts" code path, so
I was scratching my head.

> And it's possible that "refcount_t" could use that exact same model,
> and actually then offer that option that ucounts wants, of a "try to
> get a refcount, but if we have too many refcounts, then never mind, I
> can just return an error to user space instead".

Yeah, if there starts to be more of these cases, I think it'd be a
nice addition. And with the recent performance work Will Deacon did on
refcount_t, I think any general performance concerns are met now. But
I'd love to just leave refcount_t alone until we can really show a
need for an API change. :P
diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
index f71b5a4a3e74..d84cc2c0b443 100644
--- a/include/linux/user_namespace.h
+++ b/include/linux/user_namespace.h
@@ -92,7 +92,7 @@ struct ucounts {
 	struct hlist_node node;
 	struct user_namespace *ns;
 	kuid_t uid;
-	int count;
+	atomic_t count;
 	atomic_long_t ucount[UCOUNT_COUNTS];
 };
 
@@ -104,7 +104,7 @@ void retire_userns_sysctls(struct user_namespace *ns);
 struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid, enum ucount_type type);
 void dec_ucount(struct ucounts *ucounts, enum ucount_type type);
 struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid);
-struct ucounts *get_ucounts(struct ucounts *ucounts);
+struct ucounts * __must_check get_ucounts(struct ucounts *ucounts);
 void put_ucounts(struct ucounts *ucounts);
 
 #ifdef CONFIG_USER_NS
diff --git a/kernel/ucount.c b/kernel/ucount.c
index 50cc1dfb7d28..bb3203039b5e 100644
--- a/kernel/ucount.c
+++ b/kernel/ucount.c
@@ -11,7 +11,7 @@
 struct ucounts init_ucounts = {
 	.ns   = &init_user_ns,
 	.uid  = GLOBAL_ROOT_UID,
-	.count = 1,
+	.count = ATOMIC_INIT(1),
 };
 
 #define UCOUNTS_HASHTABLE_BITS 10
@@ -139,6 +139,22 @@ static void hlist_add_ucounts(struct ucounts *ucounts)
 	spin_unlock_irq(&ucounts_lock);
 }
 
+/* 127: arbitrary random number, small enough to assemble well */
+#define refcount_zero_or_close_to_overflow(ucounts) \
+	((unsigned int) atomic_read(&ucounts->count) + 127u <= 127u)
+
+struct ucounts *get_ucounts(struct ucounts *ucounts)
+{
+	if (ucounts) {
+		if (refcount_zero_or_close_to_overflow(ucounts)) {
+			WARN_ONCE(1, "ucounts: counter has reached its maximum value");
+			return NULL;
+		}
+		atomic_inc(&ucounts->count);
+	}
+	return ucounts;
+}
+
 struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
 {
 	struct hlist_head *hashent = ucounts_hashentry(ns, uid);
@@ -155,7 +171,7 @@ struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
 
 		new->ns = ns;
 		new->uid = uid;
-		new->count = 0;
+		atomic_set(&new->count, 1);
 
 		spin_lock_irq(&ucounts_lock);
 		ucounts = find_ucounts(ns, uid, hashent);
@@ -163,33 +179,12 @@ struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
 			kfree(new);
 		} else {
 			hlist_add_head(&new->node, hashent);
-			ucounts = new;
+			spin_unlock_irq(&ucounts_lock);
+			return new;
 		}
 	}
-	if (ucounts->count == INT_MAX)
-		ucounts = NULL;
-	else
-		ucounts->count += 1;
 	spin_unlock_irq(&ucounts_lock);
-	return ucounts;
-}
-
-struct ucounts *get_ucounts(struct ucounts *ucounts)
-{
-	unsigned long flags;
-
-	if (!ucounts)
-		return NULL;
-
-	spin_lock_irqsave(&ucounts_lock, flags);
-	if (ucounts->count == INT_MAX) {
-		WARN_ONCE(1, "ucounts: counter has reached its maximum value");
-		ucounts = NULL;
-	} else {
-		ucounts->count += 1;
-	}
-	spin_unlock_irqrestore(&ucounts_lock, flags);
-
+	ucounts = get_ucounts(ucounts);
 	return ucounts;
 }
 
@@ -197,15 +192,12 @@ void put_ucounts(struct ucounts *ucounts)
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&ucounts_lock, flags);
-	ucounts->count -= 1;
-	if (!ucounts->count)
+	if (atomic_dec_and_test(&ucounts->count)) {
+		spin_lock_irqsave(&ucounts_lock, flags);
 		hlist_del_init(&ucounts->node);
-	else
-		ucounts = NULL;
-	spin_unlock_irqrestore(&ucounts_lock, flags);
-
-	kfree(ucounts);
+		spin_unlock_irqrestore(&ucounts_lock, flags);
+		kfree(ucounts);
+	}
 }
 
 static inline bool atomic_long_inc_below(atomic_long_t *v, int u)
The current implementation of the ucounts reference counter requires the
use of spin_lock. We're going to use get_ucounts() in more performance
critical areas like a handling of RLIMIT_SIGPENDING.

Now we need to use spin_lock only if we want to change the hashtable.

Signed-off-by: Alexey Gladkov <gladkov.alexey@gmail.com>
---
 include/linux/user_namespace.h |  4 +--
 kernel/ucount.c                | 60 +++++++++++++++-------------------
 2 files changed, 28 insertions(+), 36 deletions(-)
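As a usage note on the __must_check annotation added above: callers now
have to handle the "reference refused" case explicitly. A hypothetical
caller might look something like the sketch below (the surrounding
names, including uc_src and the -EAGAIN choice, are illustrative and
not taken from the series):

	/* "uc_src" stands in for wherever the caller found its ucounts */
	struct ucounts *uc = get_ucounts(uc_src);

	if (!uc)
		return -EAGAIN;	/* counter refused: fail instead of overflowing */

	/* ... use the reference ... */

	put_ucounts(uc);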