diff mbox

[net-next,2/2] xfrm: Fix unaligned access in xfrm_notify_sa() for DELSA

Message ID 20151021123628.GP6948@oracle.com (mailing list archive)
State Not Applicable
Delegated to: Herbert Xu

Commit Message

Sowmini Varadhan Oct. 21, 2015, 12:36 p.m. UTC
On (10/21/15 06:54), Sowmini Varadhan wrote:
> But __alignof__(*p) is 8 on sparc, and without the patch I get
> all types of unaligned access. So what do you suggest as the fix?

Even though the alignment is, in fact, 8 (and that comes from
struct xfrm_lifetime_cfg), if userspace is firmly attached to the 4-byte
alignment, I think we can retain that behavior and still avoid
unaligned access in the kernel with the following (admittedly ugly) hack.
Can you please take a look? I tested it with 'ip x m' and a transport
mode tunnel on my sparc.



--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Sowmini Varadhan Oct. 21, 2015, 1:11 p.m. UTC | #1
On (10/21/15 06:22), David Miller wrote:
> memcpy() _never_ works for avoiding unaligned accesses.
> 
> I repeat, no matter what you do, no matter what kinds of casts or
> fancy typing you use, memcpy() _never_ works for this purpose.
  :
> There is one and only one portable way to access unaligned data,
> and that is with the get_unaligned() and put_unaligned() helpers.

ok. I'll fix it up to use the *_unaligned functions and resend this 
out later today.

--Sowmini

David Miller Oct. 21, 2015, 1:22 p.m. UTC | #2
From: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Date: Wed, 21 Oct 2015 08:36:28 -0400

> +               memcpy((u8 *)p, &tmp, sizeof(tmp));

memcpy() _never_ works for avoiding unaligned accesses.

I repeat, no matter what you do, no matter what kinds of casts or
fancy typing you use, memcpy() _never_ works for this purpose.

The compiler knows that the pointer you are using is supposed to be at
least 8 byte aligned, it can look through the cast and that's
completely legitimate for it to do.

So it can legitimately inline emit loads and stores to implement the
memcpy() and those will still get the unaligned accesses.

There is one and only one portable way to access unaligned data,
and that is with the get_unaligned() and put_unaligned() helpers.

Userland must do something similar to deal with this situation
as well.

Patch

diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index 158ef4a..ca4e7f0 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -2620,7 +2620,7 @@  static inline size_t xfrm_sa_len(struct xfrm_state *x)
 static int xfrm_notify_sa(struct xfrm_state *x, const struct km_event *c)
 {
        struct net *net = xs_net(x);
-       struct xfrm_usersa_info *p;
+       struct xfrm_usersa_info *p, tmp;
        struct xfrm_usersa_id *id;
        struct nlmsghdr *nlh;
        struct sk_buff *skb;
@@ -2659,11 +2659,16 @@  static int xfrm_notify_sa(struct xfrm_state *x, const struct km_event *c)
                if (attr == NULL)
                        goto out_free_skb;
 
-               p = PTR_ALIGN(nla_data(attr), __alignof__(*p));
+               p = nla_data(attr);
+               err = copy_to_user_state_extra(x, &tmp, skb);
+               if (err)
+                       goto out_free_skb;
+               memcpy((u8 *)p, &tmp, sizeof(tmp));
+       } else {
+               err = copy_to_user_state_extra(x, p, skb);
+               if (err)
+                       goto out_free_skb;
        }
-       err = copy_to_user_state_extra(x, p, skb);
-       if (err)
-               goto out_free_skb;
 
        nlmsg_end(skb, nlh);