Message ID | 20250329000338.1031289-2-pcc@google.com (mailing list archive) |
---|---|
State | New |
Series | string: Add load_unaligned_zeropad() code path to sized_strscpy() |
On Fri, Mar 28, 2025 at 05:03:36PM -0700, Peter Collingbourne wrote:
> diff --git a/lib/string.c b/lib/string.c
> index eb4486ed40d25..b632c71df1a50 100644
> --- a/lib/string.c
> +++ b/lib/string.c
> @@ -119,6 +119,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
>  	if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
>  		return -E2BIG;
>  
> +#ifndef CONFIG_DCACHE_WORD_ACCESS
>  #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>  	/*
>  	 * If src is unaligned, don't cross a page boundary,
> @@ -133,12 +134,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
>  	/* If src or dest is unaligned, don't do word-at-a-time. */
>  	if (((long) dest | (long) src) & (sizeof(long) - 1))
>  		max = 0;
> +#endif
>  #endif
>  
>  	/*
> -	 * read_word_at_a_time() below may read uninitialized bytes after the
> -	 * trailing zero and use them in comparisons. Disable this optimization
> -	 * under KMSAN to prevent false positive reports.
> +	 * load_unaligned_zeropad() or read_word_at_a_time() below may read
> +	 * uninitialized bytes after the trailing zero and use them in
> +	 * comparisons. Disable this optimization under KMSAN to prevent
> +	 * false positive reports.
>  	 */
>  	if (IS_ENABLED(CONFIG_KMSAN))
>  		max = 0;
> @@ -146,7 +149,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
>  	while (max >= sizeof(unsigned long)) {
>  		unsigned long c, data;
>  
> +#ifdef CONFIG_DCACHE_WORD_ACCESS
> +		c = load_unaligned_zeropad(src+res);
> +#else
>  		c = read_word_at_a_time(src+res);
> +#endif
>  		if (has_zero(c, &data, &constants)) {
>  			data = prep_zero_mask(c, data, &constants);
>  			data = create_zero_mask(data);

Kees mentioned the scenario where this crosses the page boundary and we
pad the source with zeros. It's probably fine, but there are 70+ cases
where the strscpy() return value is checked; I only looked at a couple.

Could we at least preserve the behaviour with regard to page boundaries
and keep the existing 'max' limiting logic? If I read the code
correctly, a fallback to reading one byte at a time from an unmapped
page would panic. We also get this behaviour if src[0] is read from an
invalid address, though for arm64 the panic would be in
ex_handler_load_unaligned_zeropad() when count >= 8.

Reading across a tag granule (but not across a page boundary) and
causing a tag check fault would result in padding, but we can live with
this, and only architectures that do MTE-style tag checking would get
the new behaviour.

What I haven't checked is whether a tag check fault in
ex_handler_load_unaligned_zeropad() would confuse the KASAN logic for
MTE (it would be a second tag check fault while processing the first).
At a quick look, it seems ok but it might be worth checking.
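For reference, the zero detection in the loop quoted above boils down to the classic "haszero" bit trick. A minimal userspace model (an illustration only, not the kernel's per-architecture has_zero()/create_zero_mask() implementation; word_has_zero() and example() are names invented here):

	#include <stdint.h>
	#include <string.h>

	/*
	 * Return nonzero iff the 64-bit word v contains a zero byte:
	 * (v - 0x01..01) borrows into the high bit of every byte that
	 * was zero, and "& ~v" clears bytes whose high bit was set to
	 * begin with.
	 */
	static int word_has_zero(uint64_t v)
	{
		return ((v - 0x0101010101010101ULL) & ~v &
			0x8080808080808080ULL) != 0;
	}

	/* Usage: the word covering "abc\0defg" reports a zero byte. */
	static int example(void)
	{
		uint64_t w;

		memcpy(&w, "abc\0defg", sizeof(w));
		return word_has_zero(w);	/* nonzero: NUL at byte 3 */
	}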
On Wed, Apr 2, 2025 at 1:10 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Fri, Mar 28, 2025 at 05:03:36PM -0700, Peter Collingbourne wrote:
> > diff --git a/lib/string.c b/lib/string.c
> > index eb4486ed40d25..b632c71df1a50 100644
> > --- a/lib/string.c
> > +++ b/lib/string.c
> > @@ -119,6 +119,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
> >  	if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
> >  		return -E2BIG;
> >  
> > +#ifndef CONFIG_DCACHE_WORD_ACCESS
> >  #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> >  	/*
> >  	 * If src is unaligned, don't cross a page boundary,
> > @@ -133,12 +134,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
> >  	/* If src or dest is unaligned, don't do word-at-a-time. */
> >  	if (((long) dest | (long) src) & (sizeof(long) - 1))
> >  		max = 0;
> > +#endif
> >  #endif
> >  
> >  	/*
> > -	 * read_word_at_a_time() below may read uninitialized bytes after the
> > -	 * trailing zero and use them in comparisons. Disable this optimization
> > -	 * under KMSAN to prevent false positive reports.
> > +	 * load_unaligned_zeropad() or read_word_at_a_time() below may read
> > +	 * uninitialized bytes after the trailing zero and use them in
> > +	 * comparisons. Disable this optimization under KMSAN to prevent
> > +	 * false positive reports.
> >  	 */
> >  	if (IS_ENABLED(CONFIG_KMSAN))
> >  		max = 0;
> > @@ -146,7 +149,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
> >  	while (max >= sizeof(unsigned long)) {
> >  		unsigned long c, data;
> >  
> > +#ifdef CONFIG_DCACHE_WORD_ACCESS
> > +		c = load_unaligned_zeropad(src+res);
> > +#else
> >  		c = read_word_at_a_time(src+res);
> > +#endif
> >  		if (has_zero(c, &data, &constants)) {
> >  			data = prep_zero_mask(c, data, &constants);
> >  			data = create_zero_mask(data);
>
> Kees mentioned the scenario where this crosses the page boundary and we
> pad the source with zeros. It's probably fine, but there are 70+ cases
> where the strscpy() return value is checked; I only looked at a couple.

The return value is the same with/without the patch: it's the number of
bytes copied before the null terminator (i.e. not including the extra
nulls now written).

> Could we at least preserve the behaviour with regard to page boundaries
> and keep the existing 'max' limiting logic? If I read the code
> correctly, a fallback to reading one byte at a time from an unmapped
> page would panic. We also get this behaviour if src[0] is read from an
> invalid address, though for arm64 the panic would be in
> ex_handler_load_unaligned_zeropad() when count >= 8.

So do you think that the code should continue to panic if the source
string is unterminated because of a page boundary? I don't have a
strong opinion, but maybe that's something we should only do if some
error-checking option is turned on?

> Reading across a tag granule (but not across a page boundary) and
> causing a tag check fault would result in padding, but we can live with
> this, and only architectures that do MTE-style tag checking would get
> the new behaviour.

By "padding" do you mean the extra (up to sizeof(unsigned long)) nulls
now written to the destination? It seems unlikely that code would
deliberately depend on the nulls not being written; the number of nulls
written is not part of the documented interface contract and already
varies today depending on how close the source string is to a page
boundary.
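To make that return-value contract concrete, a minimal caller sketch (buf and name are hypothetical; on truncation strscpy() returns -E2BIG and the destination is still NUL-terminated):

	char buf[16];
	ssize_t ret = strscpy(buf, name, sizeof(buf));

	if (ret < 0)	/* -E2BIG: name didn't fit; buf holds a truncated copy */
		return ret;
	/* ret == strlen(buf): bytes copied, not counting the NUL(s) */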
If code is accidentally depending on nulls not being written, that's
almost certainly a bug anyway (because of the page boundary thing) and
we should fix it if discovered by this change.

> What I haven't checked is whether a tag check fault in
> ex_handler_load_unaligned_zeropad() would confuse the KASAN logic for
> MTE (it would be a second tag check fault while processing the first).
> At a quick look, it seems ok but it might be worth checking.

Yes, that works, and I added a test case for that in v5. The stack
trace looks like this:

[ 21.969736] Call trace:
[ 21.969739]  show_stack+0x18/0x24 (C)
[ 21.969756]  __dump_stack+0x28/0x38
[ 21.969764]  dump_stack_lvl+0x54/0x6c
[ 21.969770]  print_address_description+0x7c/0x274
[ 21.969780]  print_report+0x90/0xe8
[ 21.969789]  kasan_report+0xf0/0x150
[ 21.969799]  __do_kernel_fault+0x5c/0x1cc
[ 21.969808]  do_bad_area+0x30/0xec
[ 21.969816]  do_tag_check_fault+0x20/0x30
[ 21.969824]  do_mem_abort+0x3c/0x8c
[ 21.969832]  el1_abort+0x3c/0x5c
[ 21.969840]  el1h_64_sync_handler+0x50/0xcc
[ 21.969847]  el1h_64_sync+0x6c/0x70
[ 21.969854]  fixup_exception+0xb0/0xe4 (P)
[ 21.969865]  __do_kernel_fault+0x80/0x1cc
[ 21.969873]  do_bad_area+0x30/0xec
[ 21.969881]  do_tag_check_fault+0x20/0x30
[ 21.969889]  do_mem_abort+0x3c/0x8c
[ 21.969896]  el1_abort+0x3c/0x5c
[ 21.969905]  el1h_64_sync_handler+0x50/0xcc
[ 21.969912]  el1h_64_sync+0x6c/0x70
[ 21.969917]  sized_strscpy+0x30/0x114 (P)
[ 21.969929]  kunit_try_run_case+0x64/0x160
[ 21.969939]  kunit_generic_run_threadfn_adapter+0x28/0x4c
[ 21.969950]  kthread+0x1c4/0x208
[ 21.969956]  ret_from_fork+0x10/0x20

Peter
On Wed, Apr 02, 2025 at 05:08:51PM -0700, Peter Collingbourne wrote:
> On Wed, Apr 2, 2025 at 1:10 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Fri, Mar 28, 2025 at 05:03:36PM -0700, Peter Collingbourne wrote:
> > > diff --git a/lib/string.c b/lib/string.c
> > > index eb4486ed40d25..b632c71df1a50 100644
> > > --- a/lib/string.c
> > > +++ b/lib/string.c
> > > @@ -119,6 +119,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
> > >  	if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
> > >  		return -E2BIG;
> > >  
> > > +#ifndef CONFIG_DCACHE_WORD_ACCESS
> > >  #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> > >  	/*
> > >  	 * If src is unaligned, don't cross a page boundary,
> > > @@ -133,12 +134,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
> > >  	/* If src or dest is unaligned, don't do word-at-a-time. */
> > >  	if (((long) dest | (long) src) & (sizeof(long) - 1))
> > >  		max = 0;
> > > +#endif
> > >  #endif
> > >  
> > >  	/*
> > > -	 * read_word_at_a_time() below may read uninitialized bytes after the
> > > -	 * trailing zero and use them in comparisons. Disable this optimization
> > > -	 * under KMSAN to prevent false positive reports.
> > > +	 * load_unaligned_zeropad() or read_word_at_a_time() below may read
> > > +	 * uninitialized bytes after the trailing zero and use them in
> > > +	 * comparisons. Disable this optimization under KMSAN to prevent
> > > +	 * false positive reports.
> > >  	 */
> > >  	if (IS_ENABLED(CONFIG_KMSAN))
> > >  		max = 0;
> > > @@ -146,7 +149,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
> > >  	while (max >= sizeof(unsigned long)) {
> > >  		unsigned long c, data;
> > >  
> > > +#ifdef CONFIG_DCACHE_WORD_ACCESS
> > > +		c = load_unaligned_zeropad(src+res);
> > > +#else
> > >  		c = read_word_at_a_time(src+res);
> > > +#endif
> > >  		if (has_zero(c, &data, &constants)) {
> > >  			data = prep_zero_mask(c, data, &constants);
> > >  			data = create_zero_mask(data);
> >
> > Kees mentioned the scenario where this crosses the page boundary and we
> > pad the source with zeros. It's probably fine, but there are 70+ cases
> > where the strscpy() return value is checked; I only looked at a couple.
>
> The return value is the same with/without the patch: it's the number of
> bytes copied before the null terminator (i.e. not including the extra
> nulls now written).

I was thinking of the -E2BIG return, but you are right, the patch
wouldn't change this. If, for example, you read 8 bytes across a page
boundary and it faults, load_unaligned_zeropad() returns fewer
characters copied, implying the source was null-terminated.
read_word_at_a_time(), OTOH, panics in the next byte-at-a-time loop.
But it wouldn't return -E2BIG either, so it doesn't matter for the
caller.

> > Could we at least preserve the behaviour with regard to page boundaries
> > and keep the existing 'max' limiting logic? If I read the code
> > correctly, a fallback to reading one byte at a time from an unmapped
> > page would panic. We also get this behaviour if src[0] is read from an
> > invalid address, though for arm64 the panic would be in
> > ex_handler_load_unaligned_zeropad() when count >= 8.
>
> So do you think that the code should continue to panic if the source
> string is unterminated because of a page boundary? I don't have a
> strong opinion, but maybe that's something we should only do if some
> error-checking option is turned on?

It's mostly about keeping the current behaviour w.r.t. page boundaries.
Not a strong opinion either.
The change would be to not read across page boundaries.

> > Reading across a tag granule (but not across a page boundary) and
> > causing a tag check fault would result in padding, but we can live with
> > this, and only architectures that do MTE-style tag checking would get
> > the new behaviour.
>
> By "padding" do you mean the extra (up to sizeof(unsigned long)) nulls
> now written to the destination?

No, I meant the padding of the source when a fault occurs. The write to
the destination would only be a single '\0' byte. It's the safe
termination vs. panic trade-off discussed above.

> > What I haven't checked is whether a tag check fault in
> > ex_handler_load_unaligned_zeropad() would confuse the KASAN logic for
> > MTE (it would be a second tag check fault while processing the first).
> > At a quick look, it seems ok but it might be worth checking.
>
> Yes, that works, and I added a test case for that in v5. The stack
> trace looks like this:

Thanks for checking.
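As a rough behavioural model of the "padding of the source" being discussed (an illustration of the load_unaligned_zeropad() contract, not the arm64 fixup code; model_zeropad() and the "valid" parameter, marking the first inaccessible byte, are invented here):

	/*
	 * Model: bytes at addr[i] for i >= valid would fault; the
	 * returned word behaves as if they were zero (little-endian
	 * byte order assumed).
	 */
	static unsigned long model_zeropad(const unsigned char *addr,
					   size_t valid)
	{
		unsigned long v = 0;
		size_t i;

		for (i = 0; i < sizeof(v) && i < valid; i++)
			v |= (unsigned long)addr[i] << (8 * i);
		return v;
	}

Under this model, a word read that reaches a faulting boundary just after "ab" yields 0x6261; has_zero() then sees a terminator, so the copy ends with a single '\0' written to the destination instead of a panic.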
diff --git a/lib/string.c b/lib/string.c
index eb4486ed40d25..b632c71df1a50 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -119,6 +119,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
 		return -E2BIG;
 
+#ifndef CONFIG_DCACHE_WORD_ACCESS
 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	/*
 	 * If src is unaligned, don't cross a page boundary,
@@ -133,12 +134,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	/* If src or dest is unaligned, don't do word-at-a-time. */
 	if (((long) dest | (long) src) & (sizeof(long) - 1))
 		max = 0;
+#endif
 #endif
 
 	/*
-	 * read_word_at_a_time() below may read uninitialized bytes after the
-	 * trailing zero and use them in comparisons. Disable this optimization
-	 * under KMSAN to prevent false positive reports.
+	 * load_unaligned_zeropad() or read_word_at_a_time() below may read
+	 * uninitialized bytes after the trailing zero and use them in
+	 * comparisons. Disable this optimization under KMSAN to prevent
+	 * false positive reports.
 	 */
 	if (IS_ENABLED(CONFIG_KMSAN))
 		max = 0;
@@ -146,7 +149,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	while (max >= sizeof(unsigned long)) {
 		unsigned long c, data;
 
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		c = load_unaligned_zeropad(src+res);
+#else
 		c = read_word_at_a_time(src+res);
+#endif
 		if (has_zero(c, &data, &constants)) {
 			data = prep_zero_mask(c, data, &constants);
 			data = create_zero_mask(data);
The call to read_word_at_a_time() in sized_strscpy() is problematic
with MTE because it may trigger a tag check fault when reading across a
tag granule (16 bytes) boundary. To make this code MTE compatible,
let's start using load_unaligned_zeropad() on architectures where it is
available (i.e. architectures that define CONFIG_DCACHE_WORD_ACCESS).
Because load_unaligned_zeropad() takes care of page boundaries as well
as tag granule boundaries, also disable the code preventing crossing
page boundaries when using load_unaligned_zeropad().

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/If4b22e43b5a4ca49726b4bf98ada827fdf755548
Fixes: 94ab5b61ee16 ("kasan, arm64: enable CONFIG_KASAN_HW_TAGS")
Cc: stable@vger.kernel.org
---
v2:
- new approach

 lib/string.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)
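For context, the failure mode being fixed can be sketched like this (an illustration under assumed conditions: a granule boundary at p+16, an unaligned 8-byte word read, and an invented pointer p and string contents):

	/*
	 * p is granule-aligned; "hell" occupies p[11..14] and its NUL
	 * is at p[15], the last byte of a granule tagged A; p[16..]
	 * belongs to a neighbouring object tagged B.
	 *
	 *        granule tag A              granule tag B
	 *   p: [ ...11 bytes... h e l l \0 ][ ?? ?? ?? ... ]
	 *                       ^src = p+11  ^p+16
	 *
	 * read_word_at_a_time(src) loads p[11..18] and touches tag-B
	 * memory, so it takes a tag check fault even though the NUL
	 * sits at p[15]. load_unaligned_zeropad(src) faults the same
	 * way, but its fixup handler re-reads 8-byte-aligned at p+8
	 * (still tag A) and shifts, so has_zero() sees "hell\0"
	 * zero-extended and the copy terminates normally.
	 */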