| Message ID | a0d5b472c0c20ca34755a9636e1494d9b2e27af7.1579274660.git.robin.murphy@arm.com (mailing list archive) |
| --- | --- |
| State | Mainlined |
| Commit | c2c24edb1d9c308011f5a1328563d8da8c92c849 |
| Series | arm64: csum: Fix pathological zero-length calls |
```diff
diff --git a/arch/arm64/lib/csum.c b/arch/arm64/lib/csum.c
index 847eb725ce09..1f82c66b32ea 100644
--- a/arch/arm64/lib/csum.c
+++ b/arch/arm64/lib/csum.c
@@ -20,6 +20,9 @@ unsigned int do_csum(const unsigned char *buff, int len)
 	const u64 *ptr;
 	u64 data, sum64 = 0;
 
+	if (unlikely(len == 0))
+		return 0;
+
 	offset = (unsigned long)buff & 7;
 	/*
 	 * This is to all intents and purposes safe, since rounding down cannot
```
In validating the checksumming results of the new routine, I sadly neglected to test its not-checksumming results. Thus it slipped through that the one case where @buff is already dword-aligned and @len = 0 manages to defeat the tail-masking logic and behave as if @len = 8. For a zero length it doesn't make much sense to dereference @buff anyway, so just add an early return (which has essentially zero impact on performance).

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
I'm still trying to make sense of the CKI UDP fails, but I've re-tested against the generic routine and this was the only thing that fell out.

 arch/arm64/lib/csum.c | 3 +++
 1 file changed, 3 insertions(+)
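For context, here is a minimal standalone sketch of the failure mode described above. It is not the kernel's actual masking code (the hunk above does not show the tail handling), and the `tail_mask()` helper and its mask expression are illustrative assumptions. What it demonstrates: when the byte count is zero, a "keep the low N bytes" mask requires a 64-bit shift by 64, which is undefined behaviour in C and which the hardware shifter on AArch64 (and x86-64) truncates to a shift by 0, so the mask stays all-ones and the whole dword gets summed, matching the "behaves as if @len = 8" symptom.

```c
/*
 * Userspace illustration only -- not arch/arm64/lib/csum.c itself.
 * Shows how a "keep the low nbytes bytes" mask can misbehave for
 * nbytes == 0 when the shift amount wraps at 64.
 */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper mirroring the general tail-masking idea. */
static uint64_t tail_mask(unsigned int nbytes)
{
	/*
	 * Intended: nbytes == 0 -> mask == 0 (discard the whole dword).
	 * Actual: the shift amount is 64, which is undefined behaviour in C;
	 * the hardware uses only the low 6 bits of the shift amount, so the
	 * shift becomes 0 and the mask stays all-ones.
	 */
	volatile unsigned int shift = 64 - 8 * nbytes;	/* volatile: defeat constant folding */

	return ~0ULL >> shift;
}

int main(void)
{
	const uint64_t data = 0x0102030405060708ULL;	/* a dword-aligned "tail" */

	for (unsigned int n = 0; n <= 8; n++)
		printf("len %u -> summed bytes = %#018llx\n",
		       n, (unsigned long long)(data & tail_mask(n)));
	/* len 0 typically prints 0x0102030405060708 instead of 0, i.e. 8 bytes get summed. */
	return 0;
}
```

With the early return added by this patch, do_csum() never reaches the head/tail masking path for @len = 0, so the degenerate shift case simply never arises.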