Message ID: 20240424213019.make.366-kees@kernel.org (mailing list archive)
Series: slab: Introduce dedicated bucket allocator
On 4/24/24 23:40, Kees Cook wrote: > Hi, > > Series change history: > > v3: > - clarify rationale and purpose in commit log > - rebase to -next (CONFIG_CODE_TAGGING) > - simplify calling styles and split out bucket plumbing more cleanly > - consolidate kmem_buckets_*() family introduction patches > v2: https://lore.kernel.org/lkml/20240305100933.it.923-kees@kernel.org/ > v1: https://lore.kernel.org/lkml/20240304184252.work.496-kees@kernel.org/ > > For the cover letter, I'm repeating commit log for patch 4 here, which has > additional clarifications and rationale since v2: > > Dedicated caches are available for fixed size allocations via > kmem_cache_alloc(), but for dynamically sized allocations there is only > the global kmalloc API's set of buckets available. This means it isn't > possible to separate specific sets of dynamically sized allocations into > a separate collection of caches. > > This leads to a use-after-free exploitation weakness in the Linux > kernel since many heap memory spraying/grooming attacks depend on using > userspace-controllable dynamically sized allocations to collide with > fixed size allocations that end up in same cache. > > While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense > against these kinds of "type confusion" attacks, including for fixed > same-size heap objects, we can create a complementary deterministic > defense for dynamically sized allocations that are directly user > controlled. Addressing these cases is limited in scope, so isolation these > kinds of interfaces will not become an unbounded game of whack-a-mole. For > example, pass through memdup_user(), making isolation there very > effective. What does "Addressing these cases is limited in scope, so isolation these kinds of interfaces will not become an unbounded game of whack-a-mole." mean exactly? > > In order to isolate user-controllable sized allocations from system > allocations, introduce kmem_buckets_create(), which behaves like > kmem_cache_create(). Introduce kmem_buckets_alloc(), which behaves like > kmem_cache_alloc(). Introduce kmem_buckets_alloc_track_caller() for > where caller tracking is needed. Introduce kmem_buckets_valloc() for > cases where vmalloc callback is needed. > > Allows for confining allocations to a dedicated set of sized caches > (which have the same layout as the kmalloc caches). > > This can also be used in the future to extend codetag allocation > annotations to implement per-caller allocation cache isolation[1] even > for dynamic allocations. Having per-caller allocation cache isolation looks like something that has already been done in https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3c6152940584290668b35fa0800026f6a1ae05fe albeit in a randomized way. Why not piggy-back on the infra added by this patch, instead of adding a new API? > Memory allocation pinning[2] is still needed to plug the Use-After-Free > cross-allocator weakness, but that is an existing and separate issue > which is complementary to this improvement. Development continues for > that feature via the SLAB_VIRTUAL[3] series (which could also provide > guard pages -- another complementary improvement). > > Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [1] > Link: https://googleprojectzero.blogspot.com/2021/10/how-simple-linux-kernel-memory.html [2] > Link: https://lore.kernel.org/lkml/20230915105933.495735-1-matteorizzo@google.com/ [3] To be honest, I think this series is close to useless without allocation pinning. 
And even with pinning, it's still routinely bypassed in the KernelCTF (https://github.com/google/security-research/tree/master/pocs/linux/kernelctf). Do you have some particular exploits in mind that would be completely mitigated by your series? Moreover, I'm not aware of any ongoing development of the SLAB_VIRTUAL series: the last sign of life on its thread is from 7 months ago. > > After the core implementation are 2 patches that cover the most heavily > abused "repeat offenders" used in exploits. Repeating those details here: > > The msg subsystem is a common target for exploiting[1][2][3][4][5][6] > use-after-free type confusion flaws in the kernel for both read and > write primitives. Avoid having a user-controlled size cache share the > global kmalloc allocator by using a separate set of kmalloc buckets. > > Link: https://blog.hacktivesecurity.com/index.php/2022/06/13/linux-kernel-exploit-development-1day-case-study/ [1] > Link: https://hardenedvault.net/blog/2022-11-13-msg_msg-recon-mitigation-ved/ [2] > Link: https://www.willsroot.io/2021/08/corctf-2021-fire-of-salvation-writeup.html [3] > Link: https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html [4] > Link: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html [5] > Link: https://zplin.me/papers/ELOISE.pdf [6] > Link: https://syst3mfailure.io/wall-of-perdition/ [7] > > Both memdup_user() and vmemdup_user() handle allocations that are > regularly used for exploiting use-after-free type confusion flaws in > the kernel (e.g. prctl() PR_SET_VMA_ANON_NAME[1] and setxattr[2][3][4] > respectively). > > Since both are designed for contents coming from userspace, it allows > for userspace-controlled allocation sizes. Use a dedicated set of kmalloc > buckets so these allocations do not share caches with the global kmalloc > buckets. > > Link: https://starlabs.sg/blog/2023/07-prctl-anon_vma_name-an-amusing-heap-spray/ [1] > Link: https://duasynt.com/blog/linux-kernel-heap-spray [2] > Link: https://etenal.me/archives/1336 [3] > Link: https://github.com/a13xp0p0v/kernel-hack-drill/blob/master/drill_exploit_uaf.c [4] What's the performance impact of this series? Did you run some benchmarks? > > Thanks! > > -Kees > > > Kees Cook (6): > mm/slab: Introduce kmem_buckets typedef > mm/slab: Plumb kmem_buckets into __do_kmalloc_node() > mm/slab: Introduce __kvmalloc_node() that can take kmem_buckets > argument > mm/slab: Introduce kmem_buckets_create() and family > ipc, msg: Use dedicated slab buckets for alloc_msg() > mm/util: Use dedicated slab buckets for memdup_user() > > include/linux/slab.h | 44 ++++++++++++++++-------- > ipc/msgutil.c | 13 +++++++- > lib/fortify_kunit.c | 2 +- > lib/rhashtable.c | 2 +- > mm/slab.h | 6 ++-- > mm/slab_common.c | 79 +++++++++++++++++++++++++++++++++++++++++--- > mm/slub.c | 14 ++++---- > mm/util.c | 21 +++++++++--- > 8 files changed, 146 insertions(+), 35 deletions(-) >
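To make the "ipc, msg: Use dedicated slab buckets for alloc_msg()" patch in the list above concrete, a rough C sketch of the idea follows. It is based only on the API names described in the cover letter; the kmem_buckets_create() argument list, the "msg_msg" label, and the init hook are approximations rather than the literal patch 5.

/* Conceptual sketch of the ipc/msgutil.c conversion (patch 5 in the series):
 * give msg_msg allocations their own set of size buckets so a userspace-sized
 * msgsnd() spray can no longer land in the shared kmalloc-* caches.
 * Argument lists are approximated, not copied from the patch.
 */
#include <linux/init.h>
#include <linux/msg.h>
#include <linux/slab.h>

static kmem_buckets *msg_buckets __ro_after_init;

static int __init init_msg_buckets(void)
{
	/* Behaves like kmem_cache_create(), but builds a whole set of sizes. */
	msg_buckets = kmem_buckets_create("msg_msg", SLAB_ACCOUNT, 0, 0, NULL);
	return msg_buckets ? 0 : -ENOMEM;
}
subsys_initcall(init_msg_buckets);

static struct msg_msg *alloc_msg(size_t len)
{
	size_t alen = min(len, DATALEN_MSG);	/* DATALEN_MSG as in ipc/msgutil.c */
	struct msg_msg *msg;

	/* Was: msg = kmalloc(sizeof(*msg) + alen, GFP_KERNEL_ACCOUNT); */
	msg = kmem_buckets_alloc(msg_buckets, sizeof(*msg) + alen,
				 GFP_KERNEL_ACCOUNT);
	if (!msg)
		return NULL;
	/* ... rest of alloc_msg() (segment chaining, security init) unchanged ... */
	return msg;
}

The same pattern, one kmem_buckets_create() at init and then kmem_buckets_alloc() on the allocation path, is what the cover letter means by a near drop-in replacement for kmalloc().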
On Sun, Apr 28, 2024 at 01:02:36PM +0200, jvoisin wrote: > On 4/24/24 23:40, Kees Cook wrote: > > Hi, > > > > Series change history: > > > > v3: > > - clarify rationale and purpose in commit log > > - rebase to -next (CONFIG_CODE_TAGGING) > > - simplify calling styles and split out bucket plumbing more cleanly > > - consolidate kmem_buckets_*() family introduction patches > > v2: https://lore.kernel.org/lkml/20240305100933.it.923-kees@kernel.org/ > > v1: https://lore.kernel.org/lkml/20240304184252.work.496-kees@kernel.org/ > > > > For the cover letter, I'm repeating commit log for patch 4 here, which has > > additional clarifications and rationale since v2: > > > > Dedicated caches are available for fixed size allocations via > > kmem_cache_alloc(), but for dynamically sized allocations there is only > > the global kmalloc API's set of buckets available. This means it isn't > > possible to separate specific sets of dynamically sized allocations into > > a separate collection of caches. > > > > This leads to a use-after-free exploitation weakness in the Linux > > kernel since many heap memory spraying/grooming attacks depend on using > > userspace-controllable dynamically sized allocations to collide with > > fixed size allocations that end up in same cache. > > > > While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense > > against these kinds of "type confusion" attacks, including for fixed > > same-size heap objects, we can create a complementary deterministic > > defense for dynamically sized allocations that are directly user > > controlled. Addressing these cases is limited in scope, so isolation these > > kinds of interfaces will not become an unbounded game of whack-a-mole. For > > example, pass through memdup_user(), making isolation there very > > effective. > > What does "Addressing these cases is limited in scope, so isolation > these kinds of interfaces will not become an unbounded game of > whack-a-mole." mean exactly? The number of cases where there is a user/kernel API for size-controlled allocations is limited. They don't get added very often, and most are (correctly) using kmemdup_user() as the basis of their allocations. This means we have a relatively well defined set of criteria for finding places where this is needed, and most newly added interfaces will use the existing (kmemdup_user()) infrastructure that will already be covered. > > In order to isolate user-controllable sized allocations from system > > allocations, introduce kmem_buckets_create(), which behaves like > > kmem_cache_create(). Introduce kmem_buckets_alloc(), which behaves like > > kmem_cache_alloc(). Introduce kmem_buckets_alloc_track_caller() for > > where caller tracking is needed. Introduce kmem_buckets_valloc() for > > cases where vmalloc callback is needed. > > > > Allows for confining allocations to a dedicated set of sized caches > > (which have the same layout as the kmalloc caches). > > > > This can also be used in the future to extend codetag allocation > > annotations to implement per-caller allocation cache isolation[1] even > > for dynamic allocations. > Having per-caller allocation cache isolation looks like something that > has already been done in > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3c6152940584290668b35fa0800026f6a1ae05fe > albeit in a randomized way. Why not piggy-back on the infra added by > this patch, instead of adding a new API? It's not sufficient because it is a static set of buckets. 
It cannot be adjusted dynamically (which is not a problem kmem_buckets_create() has). I had asked[1], in an earlier version of CONFIG_RANDOM_KMALLOC_CACHES, for exactly the API that is provided in this series, because that would be much more flexible. And for systems that use allocation profiling, the next step would be to provide per-call-site isolation (which would supersede CONFIG_RANDOM_KMALLOC_CACHES, which we'd keep for the non-alloc-prof cases). > > Memory allocation pinning[2] is still needed to plug the Use-After-Free > > cross-allocator weakness, but that is an existing and separate issue > > which is complementary to this improvement. Development continues for > > that feature via the SLAB_VIRTUAL[3] series (which could also provide > > guard pages -- another complementary improvement). > > > > Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [1] > > Link: https://googleprojectzero.blogspot.com/2021/10/how-simple-linux-kernel-memory.html [2] > > Link: https://lore.kernel.org/lkml/20230915105933.495735-1-matteorizzo@google.com/ [3] > > To be honest, I think this series is close to useless without allocation > pinning. And even with pinning, it's still routinely bypassed in the > KernelCTF > (https://github.com/google/security-research/tree/master/pocs/linux/kernelctf). Sure, I can understand why you might think that, but I disagree. This adds the building blocks we need for better allocation isolation control, and stops existing (and similar) attacks today. But yes, given attackers with sufficient control over the entire system, all mitigations get weaker. We can't fall into the trap of "perfect security"; real-world experience shows that incremental improvements like this can strongly impact the difficulty of mounting attacks. Not all flaws are created equal; not everything is exploitable to the same degree. > Do you have some particular exploits in mind that would be completely > mitigated by your series? I link to like a dozen in the last two patches. :P This series immediately closes 3 well used exploit methodologies. Attackers exploiting new flaws that could have used the killed methods must now choose methods that have greater complexity, and this drives them towards cross-allocator attacks. Robust exploits there are more costly to develop as we narrow the scope of methods. Bad analogy: we're locking the doors of a house. Yes, some windows may still be unlocked, but now they'll need a ladder. And it doesn't make sense to lock the windows if we didn't lock the doors first. This is what I mean by complementary defenses, and comes back to what I mentioned earlier: "perfect security" is a myth, but incremental security works. > Moreover, I'm not aware of any ongoing development of the SLAB_VIRTUAL > series: the last sign of life on its thread is from 7 months ago. Yeah, I know, but sometimes other things get in the way. Matteo assures me it's still coming. Since you're interested in seeing SLAB_VIRTUAL land, please join the development efforts. Reach out to Matteo (you, he, and I all work for the same company) and see where you can assist. Surely this can be something you can contribute to while "on the clock"? > > After the core implementation are 2 patches that cover the most heavily > > abused "repeat offenders" used in exploits. Repeating those details here: > > > > The msg subsystem is a common target for exploiting[1][2][3][4][5][6] > > use-after-free type confusion flaws in the kernel for both read and > > write primitives. 
Avoid having a user-controlled size cache share the > > global kmalloc allocator by using a separate set of kmalloc buckets. > > > > Link: https://blog.hacktivesecurity.com/index.php/2022/06/13/linux-kernel-exploit-development-1day-case-study/ [1] > > Link: https://hardenedvault.net/blog/2022-11-13-msg_msg-recon-mitigation-ved/ [2] > > Link: https://www.willsroot.io/2021/08/corctf-2021-fire-of-salvation-writeup.html [3] > > Link: https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html [4] > > Link: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html [5] > > Link: https://zplin.me/papers/ELOISE.pdf [6] > > Link: https://syst3mfailure.io/wall-of-perdition/ [7] > > > > Both memdup_user() and vmemdup_user() handle allocations that are > > regularly used for exploiting use-after-free type confusion flaws in > > the kernel (e.g. prctl() PR_SET_VMA_ANON_NAME[1] and setxattr[2][3][4] > > respectively). > > > > Since both are designed for contents coming from userspace, it allows > > for userspace-controlled allocation sizes. Use a dedicated set of kmalloc > > buckets so these allocations do not share caches with the global kmalloc > > buckets. > > > > Link: https://starlabs.sg/blog/2023/07-prctl-anon_vma_name-an-amusing-heap-spray/ [1] > > Link: https://duasynt.com/blog/linux-kernel-heap-spray [2] > > Link: https://etenal.me/archives/1336 [3] > > Link: https://github.com/a13xp0p0v/kernel-hack-drill/blob/master/drill_exploit_uaf.c [4] > > What's the performance impact of this series? Did you run some benchmarks? I wasn't able to measure any performance impact at all. It does add a small bit of memory overhead, but it's on the order of a dozen pages used for the 2 extra sets of buckets. (E.g. it's well below the overhead introduced by CONFIG_RANDOM_KMALLOC_CACHES, which adds 16 extra sets of buckets.) -Kees [1] https://lore.kernel.org/lkml/202305161204.CB4A87C13@keescook/
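Since two of the exploit techniques discussed above funnel through memdup_user(), here is a similar conceptual sketch of what the mm/util.c change amounts to, using the kmem_buckets_alloc_track_caller() helper named in the cover letter. The "user_buckets" name and the kmem_buckets_create() arguments are assumptions for illustration, not the literal diff.

/* Conceptual sketch of the memdup_user() isolation (patch 6 in the series).
 * Bucket-set name and create() arguments are approximations.
 */
#include <linux/err.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

static kmem_buckets *user_buckets __ro_after_init;

static int __init init_user_buckets(void)
{
	user_buckets = kmem_buckets_create("memdup_user", 0, 0, 0, NULL);
	return user_buckets ? 0 : -ENOMEM;
}
subsys_initcall(init_user_buckets);

void *memdup_user(const void __user *src, size_t len)
{
	void *p;

	/* Was: p = kmalloc_track_caller(len, GFP_USER | __GFP_NOWARN); */
	p = kmem_buckets_alloc_track_caller(user_buckets, len,
					    GFP_USER | __GFP_NOWARN);
	if (!p)
		return ERR_PTR(-ENOMEM);
	if (copy_from_user(p, src, len)) {
		kfree(p);
		return ERR_PTR(-EFAULT);
	}
	return p;
}

vmemdup_user() would follow the same shape using kmem_buckets_valloc(), which is why one internal change covers more than one of the listed exploit paths.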
On 4/28/24 19:02, Kees Cook wrote: > On Sun, Apr 28, 2024 at 01:02:36PM +0200, jvoisin wrote: >> On 4/24/24 23:40, Kees Cook wrote: >>> Hi, >>> >>> Series change history: >>> >>> v3: >>> - clarify rationale and purpose in commit log >>> - rebase to -next (CONFIG_CODE_TAGGING) >>> - simplify calling styles and split out bucket plumbing more cleanly >>> - consolidate kmem_buckets_*() family introduction patches >>> v2: https://lore.kernel.org/lkml/20240305100933.it.923-kees@kernel.org/ >>> v1: https://lore.kernel.org/lkml/20240304184252.work.496-kees@kernel.org/ >>> >>> For the cover letter, I'm repeating commit log for patch 4 here, which has >>> additional clarifications and rationale since v2: >>> >>> Dedicated caches are available for fixed size allocations via >>> kmem_cache_alloc(), but for dynamically sized allocations there is only >>> the global kmalloc API's set of buckets available. This means it isn't >>> possible to separate specific sets of dynamically sized allocations into >>> a separate collection of caches. >>> >>> This leads to a use-after-free exploitation weakness in the Linux >>> kernel since many heap memory spraying/grooming attacks depend on using >>> userspace-controllable dynamically sized allocations to collide with >>> fixed size allocations that end up in same cache. >>> >>> While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense >>> against these kinds of "type confusion" attacks, including for fixed >>> same-size heap objects, we can create a complementary deterministic >>> defense for dynamically sized allocations that are directly user >>> controlled. Addressing these cases is limited in scope, so isolation these >>> kinds of interfaces will not become an unbounded game of whack-a-mole. For >>> example, pass through memdup_user(), making isolation there very >>> effective. >> >> What does "Addressing these cases is limited in scope, so isolation >> these kinds of interfaces will not become an unbounded game of >> whack-a-mole." mean exactly? > > The number of cases where there is a user/kernel API for size-controlled > allocations is limited. They don't get added very often, and most are > (correctly) using kmemdup_user() as the basis of their allocations. This > means we have a relatively well defined set of criteria for finding > places where this is needed, and most newly added interfaces will use > the existing (kmemdup_user()) infrastructure that will already be covered. A simple CodeQL query returns 266 of them: https://lookerstudio.google.com/reporting/68b02863-4f5c-4d85-b3c1-992af89c855c/page/n92nD?params=%7B%22df3%22:%22include%25EE%2580%25803%25EE%2580%2580T%22%7D Is this number realistic and coherent with your results/own analysis? > >>> In order to isolate user-controllable sized allocations from system >>> allocations, introduce kmem_buckets_create(), which behaves like >>> kmem_cache_create(). Introduce kmem_buckets_alloc(), which behaves like >>> kmem_cache_alloc(). Introduce kmem_buckets_alloc_track_caller() for >>> where caller tracking is needed. Introduce kmem_buckets_valloc() for >>> cases where vmalloc callback is needed. >>> >>> Allows for confining allocations to a dedicated set of sized caches >>> (which have the same layout as the kmalloc caches). >>> >>> This can also be used in the future to extend codetag allocation >>> annotations to implement per-caller allocation cache isolation[1] even >>> for dynamic allocations. 
>> Having per-caller allocation cache isolation looks like something that >> has already been done in >> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3c6152940584290668b35fa0800026f6a1ae05fe >> albeit in a randomized way. Why not piggy-back on the infra added by >> this patch, instead of adding a new API? > > It's not sufficient because it is a static set of buckets. It cannot be > adjusted dynamically (which is not a problem kmem_buckets_create() has). > I had asked[1], in an earlier version of CONFIG_RANDOM_KMALLOC_CACHES, for > exactly the API that is provided in this series, because that would be > much more flexible. > > And for systems that use allocation profiling, the next step > would be to provide per-call-site isolation (which would supersede > CONFIG_RANDOM_KMALLOC_CACHES, which we'd keep for the non-alloc-prof > cases). > >>> Memory allocation pinning[2] is still needed to plug the Use-After-Free >>> cross-allocator weakness, but that is an existing and separate issue >>> which is complementary to this improvement. Development continues for >>> that feature via the SLAB_VIRTUAL[3] series (which could also provide >>> guard pages -- another complementary improvement). >>> >>> Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [1] >>> Link: https://googleprojectzero.blogspot.com/2021/10/how-simple-linux-kernel-memory.html [2] >>> Link: https://lore.kernel.org/lkml/20230915105933.495735-1-matteorizzo@google.com/ [3] >> >> To be honest, I think this series is close to useless without allocation >> pinning. And even with pinning, it's still routinely bypassed in the >> KernelCTF >> (https://github.com/google/security-research/tree/master/pocs/linux/kernelctf). > > Sure, I can understand why you might think that, but I disagree. This > adds the building blocks we need for better allocation isolation > control, and stops existing (and similar) attacks today. > But yes, given attackers with sufficient control over the entire system, > all mitigations get weaker. We can't fall into the trap of "perfect > security"; real-world experience shows that incremental improvements > like this can strongly impact the difficulty of mounting attacks. Not > all flaws are created equal; not everything is exploitable to the same > degree. It's not about "perfect security", but about wisely spending the complexity/review/performance/churn/… budgets in my opinion. >> Do you have some particular exploits in mind that would be completely >> mitigated by your series? > > I link to like a dozen in the last two patches. :P > > This series immediately closes 3 well used exploit methodologies. > Attackers exploiting new flaws that could have used the killed methods > must now choose methods that have greater complexity, and this drives > them towards cross-allocator attacks. Robust exploits there are more > costly to develop as we narrow the scope of methods. You linked exploits that were making use of the two structures that you isolated; making them use different structures would likely mean a couple of hours. I was more interested in exploits that are effectively killed; as I'm still not convinced that elastic structures are rare, and that manually isolating them one by one is attainable/sustainable/… But if you have some proper analysis in this direction, then yes, I completely agree that isolating all of them is a great idea. > > Bad analogy: we're locking the doors of a house. Yes, some windows may > still be unlocked, but now they'll need a ladder.
And it doesn't make > sense to lock the windows if we didn't lock the doors first. This is > what I mean by complementary defenses, and comes back to what I mentioned > earlier: "perfect security" is a myth, but incremental security works. > >> Moreover, I'm not aware of any ongoing development of the SLAB_VIRTUAL >> series: the last sign of life on its thread is from 7 months ago. > > Yeah, I know, but sometimes other things get in the way. Matteo assures > me it's still coming. > > Since you're interested in seeing SLAB_VIRTUAL land, please join the > development efforts. Reach out to Matteo (you, he, and I all work for > the same company) and see where you can assist. Surely this can be > something you can contribute to while "on the clock"? I left Google a couple of weeks ago unfortunately, and I won't touch anything with email-based development for less than a Google salary :D > >>> After the core implementation are 2 patches that cover the most heavily >>> abused "repeat offenders" used in exploits. Repeating those details here: >>> >>> The msg subsystem is a common target for exploiting[1][2][3][4][5][6] >>> use-after-free type confusion flaws in the kernel for both read and >>> write primitives. Avoid having a user-controlled size cache share the >>> global kmalloc allocator by using a separate set of kmalloc buckets. >>> >>> Link: https://blog.hacktivesecurity.com/index.php/2022/06/13/linux-kernel-exploit-development-1day-case-study/ [1] >>> Link: https://hardenedvault.net/blog/2022-11-13-msg_msg-recon-mitigation-ved/ [2] >>> Link: https://www.willsroot.io/2021/08/corctf-2021-fire-of-salvation-writeup.html [3] >>> Link: https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html [4] >>> Link: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html [5] >>> Link: https://zplin.me/papers/ELOISE.pdf [6] >>> Link: https://syst3mfailure.io/wall-of-perdition/ [7] >>> >>> Both memdup_user() and vmemdup_user() handle allocations that are >>> regularly used for exploiting use-after-free type confusion flaws in >>> the kernel (e.g. prctl() PR_SET_VMA_ANON_NAME[1] and setxattr[2][3][4] >>> respectively). >>> >>> Since both are designed for contents coming from userspace, it allows >>> for userspace-controlled allocation sizes. Use a dedicated set of kmalloc >>> buckets so these allocations do not share caches with the global kmalloc >>> buckets. >>> >>> Link: https://starlabs.sg/blog/2023/07-prctl-anon_vma_name-an-amusing-heap-spray/ [1] >>> Link: https://duasynt.com/blog/linux-kernel-heap-spray [2] >>> Link: https://etenal.me/archives/1336 [3] >>> Link: https://github.com/a13xp0p0v/kernel-hack-drill/blob/master/drill_exploit_uaf.c [4] >> >> What's the performance impact of this series? Did you run some benchmarks? > > I wasn't able to measure any performance impact at all. It does add a > small bit of memory overhead, but it's on the order of a dozen pages > used for the 2 extra sets of buckets. (E.g. it's well below the overhead > introduced by CONFIG_RANDOM_KMALLOC_CACHES, which adds 16 extra sets > of buckets.) Nice!
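For readers who have not looked at the CONFIG_RANDOM_KMALLOC_CACHES code being compared against in this exchange, the following is an illustrative paraphrase, not the verbatim kernel source: a boot-time seed is hashed with the kmalloc() call-site address to pick one of a fixed number of kmalloc cache copies. The copy count is fixed at build time, which is the "static set of buckets" limitation Kees refers to; kmem_buckets_create() instead lets a caller mint a brand-new set at runtime.

/* Illustrative paraphrase of the randomized cache selection; the real
 * implementation lives behind CONFIG_RANDOM_KMALLOC_CACHES and differs
 * in detail.
 */
#include <linux/hash.h>
#include <linux/log2.h>

#define RANDOM_KMALLOC_COPIES	16		/* fixed when the kernel is built */

static unsigned long random_kmalloc_seed;	/* chosen once at boot */

static unsigned int pick_random_kmalloc_copy(unsigned long caller_ip)
{
	/* Same call site within one boot always maps to the same copy;
	 * a different boot (new seed) shuffles the mapping.
	 */
	return hash_64(caller_ip ^ random_kmalloc_seed,
		       ilog2(RANDOM_KMALLOC_COPIES));
}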
On Fri, May 03, 2024 at 03:39:28PM +0200, jvoisin wrote: > On 4/28/24 19:02, Kees Cook wrote: > > On Sun, Apr 28, 2024 at 01:02:36PM +0200, jvoisin wrote: > >> On 4/24/24 23:40, Kees Cook wrote: > >>> [...] > >>> While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense > >>> against these kinds of "type confusion" attacks, including for fixed > >>> same-size heap objects, we can create a complementary deterministic > >>> defense for dynamically sized allocations that are directly user > >>> controlled. Addressing these cases is limited in scope, so isolation these > >>> kinds of interfaces will not become an unbounded game of whack-a-mole. For > >>> example, pass through memdup_user(), making isolation there very > >>> effective. > >> > >> What does "Addressing these cases is limited in scope, so isolation > >> these kinds of interfaces will not become an unbounded game of > >> whack-a-mole." mean exactly? > > > > The number of cases where there is a user/kernel API for size-controlled > > allocations is limited. They don't get added very often, and most are > > (correctly) using kmemdup_user() as the basis of their allocations. This > > means we have a relatively well defined set of criteria for finding > > places where this is needed, and most newly added interfaces will use > > the existing (kmemdup_user()) infrastructure that will already be covered. > > A simple CodeQL query returns 266 of them: > https://lookerstudio.google.com/reporting/68b02863-4f5c-4d85-b3c1-992af89c855c/page/n92nD?params=%7B%22df3%22:%22include%25EE%2580%25803%25EE%2580%2580T%22%7D These aren't filtered for being long-lived, nor filtered for userspace reachability, nor filtered for userspace size and content controllability. Take for example, cros_ec_get_panicinfo(): the size is controlled by a device, the allocation doesn't last beyond the function, and the function itself is part of device probing. > Is this number realistic and coherent with your results/own analysis? No, I think it's 1 possibly 2 orders of magnitude too high. Thank you for the link, though: we can see what the absolute upper bounds is with it, but that's not an accurate count of cases that would need to explicitly use this bucket API. But even if it did, 300 instances is still small: we converted more VLAs than that, more switch statement fallthroughs than that, and fixed more array bounds cases than that. And, again, while this series does close a bunch of methods today, it's a _prerequisite_ for doing per-call-site allocation isolation, which obviates the need for doing per-site analysis. (We can and will still do such analysis, though, since there's a benefit to it for folks that can't tolerate the expected per-site memory overhead.) > [...] > >>> Memory allocation pinning[2] is still needed to plug the Use-After-Free > >>> cross-allocator weakness, but that is an existing and separate issue > >>> which is complementary to this improvement. Development continues for > >>> that feature via the SLAB_VIRTUAL[3] series (which could also provide > >>> guard pages -- another complementary improvement). > >>> > >>> Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [1] > >>> Link: https://googleprojectzero.blogspot.com/2021/10/how-simple-linux-kernel-memory.html [2] > >>> Link: https://lore.kernel.org/lkml/20230915105933.495735-1-matteorizzo@google.com/ [3] > >> > >> To be honest, I think this series is close to useless without allocation > >> pinning. 
And even with pinning, it's still routinely bypassed in the > >> KernelCTF > >> (https://github.com/google/security-research/tree/master/pocs/linux/kernelctf). > > > > Sure, I can understand why you might think that, but I disagree. This > > adds the building blocks we need for better allocation isolation > > control, and stops existing (and similar) attacks today. > > But yes, given attackers with sufficient control over the entire system, > > all mitigations get weaker. We can't fall into the trap of "perfect > > security"; real-world experience shows that incremental improvements > > like this can strongly impact the difficulty of mounting attacks. Not > > all flaws are created equal; not everything is exploitable to the same > > degree. > > It's not about "perfect security", but about wisely spending the > complexity/review/performance/churn/… budgets in my opinion. Sure, that's an appropriate analysis to make, and it's one I've done. I think this series is well within those budgets: it abstracts the "bucket" system into a distinct object that we've needed to have extracted for other things, it's a pretty trivial review (since the abstraction makes the other patches very straightforward), using the new API is a nearly trivial drop-in replacement, and we immediately close several glaring exploit techniques, which has real-world impact. This is, IMO, a total slam dunk of a series. > >> Do you have some particular exploits in mind that would be completely > >> mitigated by your series? > > > > I link to like a dozen in the last two patches. :P > > > > This series immediately closes 3 well used exploit methodologies. > > Attackers exploiting new flaws that could have used the killed methods > > must now choose methods that have greater complexity, and this drives > > them towards cross-allocator attacks. Robust exploits there are more > > costly to develop as we narrow the scope of methods. > > You linked exploits that were making use of the two structures that you > isolated; making them use different structures would likely mean a > couple of hours. I think you underestimate what it would take to provide such a flexible replacement. As I noted earlier, the techniques have several requirements: - reachable from userspace - long-lived allocation - userspace controllable size - userspace controllable contents I'm not saying there aren't other interfaces that provide this, but it's not common, and each (like these) will have their own quirks and limitations. (e.g. the msg_msg exploit can't use the start of the allocation since the contents aren't controllable, and has a minimum size for the same reason.) This series kills the 3 techniques with _2_ changes. 2 of the techniques depend on the same internal (memdup_user()) that gets protected, which implies that it will cover other things both now and in the future. > I was more interested in exploits that are effectively killed; as I'm > still not convinced that elastic structures are rare, and that manually > isolating them one by one is attainable/sustainable/… I don't agree with your rarity analysis, but it doesn't matter, because we will be taking the next step and providing per-call-site isolation using this abstraction. > But if you have some proper analysis in this direction, then yes, I > completely agree that isolating all of them is a great idea. I don't need to perform a complete reachability analysis for all UAPI because I can point to just memdup_user(): it is the recommended way to get long-lived data from userspace.
It has been and will be used by interfaces that meet all 4 criteria for the exploit technique. Converting other APIs to it or using the bucket allocation API can happen over time as those are identified. This is standard operating procedure for incremental improvements in Linux. > > Bad analogy: we're locking the doors of a house. Yes, some windows may > > still be unlocked, but now they'll need a ladder. And it doesn't make > > sense to lock the windows if we didn't lock the doors first. This is > > what I mean by complementary defenses, and comes back to what I mentioned > > earlier: "perfect security" is a myth, but incremental security works. > > > >> Moreover, I'm not aware of any ongoing development of the SLAB_VIRTUAL > >> series: the last sign of life on its thread is from 7 months ago. > > > > Yeah, I know, but sometimes other things get in the way. Matteo assures > > me it's still coming. > > > > Since you're interested in seeing SLAB_VIRTUAL land, please join the > > development efforts. Reach out to Matteo (you, he, and I all work for > > the same company) and see where you can assist. Surely this can be > > something you can contribute to while "on the clock"? > > I left Google a couple of weeks ago unfortunately, Ah! Bummer; I didn't see that happen. :( > and I won't touch > anything with email-based development for less than a Google salary :D LOL. Yes, I can understand that. :) I do want to say, though, that objections carry a lot more weight when counter-proposal patches are provided. "This is the way." :P -Kees
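For context on the msg_msg quirks Kees mentions (the start of the allocation is not attacker-controlled, and there is a minimum size), the layout below is abbreviated from include/linux/msg.h: the kernel writes the header, so only the bytes after it carry user data, and every sprayed object is at least sizeof(struct msg_msg) bytes.

/* Abbreviated from include/linux/msg.h; comments added here for context. */
#include <linux/list.h>

struct msg_msgseg;			/* chains segments of messages larger than one page */

struct msg_msg {
	struct list_head m_list;	/* kernel-managed queue linkage      */
	long m_type;			/* message type passed by userspace  */
	size_t m_ts;			/* message text size                 */
	struct msg_msgseg *next;	/* next segment, if any              */
	void *security;			/* LSM blob                          */
	/* the user-supplied message text follows immediately */
};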
On Wed, Apr 24, 2024 at 02:40:57PM -0700, Kees Cook wrote: > Hi, > > Series change history: > > v3: > - clarify rationale and purpose in commit log > - rebase to -next (CONFIG_CODE_TAGGING) > - simplify calling styles and split out bucket plumbing more cleanly > - consolidate kmem_buckets_*() family introduction patches > v2: https://lore.kernel.org/lkml/20240305100933.it.923-kees@kernel.org/ > v1: https://lore.kernel.org/lkml/20240304184252.work.496-kees@kernel.org/ > > For the cover letter, I'm repeating commit log for patch 4 here, which has > additional clarifications and rationale since v2: > > Dedicated caches are available for fixed size allocations via > kmem_cache_alloc(), but for dynamically sized allocations there is only > the global kmalloc API's set of buckets available. This means it isn't > possible to separate specific sets of dynamically sized allocations into > a separate collection of caches. > > This leads to a use-after-free exploitation weakness in the Linux > kernel since many heap memory spraying/grooming attacks depend on using > userspace-controllable dynamically sized allocations to collide with > fixed size allocations that end up in same cache. This is going to increase internal fragmentation in the slab allocator, so we're going to need better, more visible numbers on the amount of memory stranded thusly, so users can easily see the effect this has. Please also document this effect and point users in the documentation where to check, so that we devs can get feedback on this.
On Fri, May 24, 2024 at 10:54:58AM -0400, Kent Overstreet wrote: > On Wed, Apr 24, 2024 at 02:40:57PM -0700, Kees Cook wrote: > > Hi, > > > > Series change history: > > > > v3: > > - clarify rationale and purpose in commit log > > - rebase to -next (CONFIG_CODE_TAGGING) > > - simplify calling styles and split out bucket plumbing more cleanly > > - consolidate kmem_buckets_*() family introduction patches > > v2: https://lore.kernel.org/lkml/20240305100933.it.923-kees@kernel.org/ > > v1: https://lore.kernel.org/lkml/20240304184252.work.496-kees@kernel.org/ > > > > For the cover letter, I'm repeating commit log for patch 4 here, which has > > additional clarifications and rationale since v2: > > > > Dedicated caches are available for fixed size allocations via > > kmem_cache_alloc(), but for dynamically sized allocations there is only > > the global kmalloc API's set of buckets available. This means it isn't > > possible to separate specific sets of dynamically sized allocations into > > a separate collection of caches. > > > > This leads to a use-after-free exploitation weakness in the Linux > > kernel since many heap memory spraying/grooming attacks depend on using > > userspace-controllable dynamically sized allocations to collide with > > fixed size allocations that end up in same cache. > > This is going to increase internal fragmentation in the slab allocator, > so we're going to need better, more visible numbers on the amount of > memory stranded thusly, so users can easily see the effect this has. Yes, but not significantly. It's less than the 16-buckets randomized kmalloc implementation. The numbers will be visible in /proc/slabinfo just like any other. > Please also document this effect and point users in the documentation > where to check, so that we devs can get feedback on this. Okay, sure. In the commit log, or did you have somewhere else in mind?
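One way users could get the visibility being asked for here, with no new instrumentation, is to total the /proc/slabinfo entries backing the new bucket sets. The userspace sketch below does that; the "msg_msg" and "memdup_user" name matches are assumptions about how the bucket caches would be labeled, so adjust them to whatever names actually appear.

/* Userspace sketch: sum the memory held by slab caches whose names look like
 * the dedicated bucket sets. Reading object counts from /proc/slabinfo
 * typically requires root; the matched names are guesses for illustration.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/slabinfo", "r");
	char line[512];
	unsigned long long total = 0;

	if (!f) {
		perror("fopen /proc/slabinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		char name[64];
		unsigned long active, num, objsize;

		/* Format: name <active_objs> <num_objs> <objsize> ... */
		if (sscanf(line, "%63s %lu %lu %lu",
			   name, &active, &num, &objsize) != 4)
			continue;
		if (strstr(name, "msg_msg") || strstr(name, "memdup_user"))
			total += (unsigned long long)num * objsize; /* rough: ignores per-slab overhead */
	}
	fclose(f);
	printf("dedicated-bucket slab footprint: ~%llu bytes\n", total);
	return 0;
}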