From patchwork Thu Oct 4 17:36:31 2018
X-Patchwork-Submitter: Ben Hutchings
X-Patchwork-Id: 10626449
Date: Thu, 4 Oct 2018 18:36:31 +0100
From: Ben Hutchings
To: netdev@vger.kernel.org
Cc: linux-kernel@lists.codethink.co.uk, linux-s390@vger.kernel.org,
 Ben Dooks, linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH] skb: Define NET_IP_ALIGN based on CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
Message-ID: <20181004173631.3nchegr6rm3jgz24@xylophone.i.decadent.org.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
User-Agent: NeoMutt/20170113 (1.7.2)

NET_IP_ALIGN is supposed to be defined as 0 if DMA writes to an
unaligned buffer would be more expensive than CPU access to unaligned
header fields, and otherwise defined as 2.  Currently only ppc64 and
x86 configurations define it to be 0.
However, several other architectures (conditionally) define
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, which seems to imply that
NET_IP_ALIGN should be 0.

Remove the overriding definitions for ppc64 and x86 and define
NET_IP_ALIGN solely based on CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS.

Signed-off-by: Ben Hutchings
---
 arch/powerpc/include/asm/processor.h | 11 -----------
 arch/x86/include/asm/processor.h     |  8 --------
 include/linux/skbuff.h               |  7 +++----
 3 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 52fadded5c1e..65c8210d2787 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -525,17 +525,6 @@ extern void cvt_fd(float *from, double *to);
 extern void cvt_df(double *from, float *to);
 extern void _nmask_and_or_msr(unsigned long nmask, unsigned long or_val);
 
-#ifdef CONFIG_PPC64
-/*
- * We handle most unaligned accesses in hardware. On the other hand
- * unaligned DMA can be very expensive on some ppc64 IO chips (it does
- * powers of 2 writes until it reaches sufficient alignment).
- *
- * Based on this we disable the IP header alignment in network drivers.
- */
-#define NET_IP_ALIGN 0
-#endif
-
 #endif /* __KERNEL__ */
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_PROCESSOR_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index d53c54b842da..0108efc9726e 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -33,14 +33,6 @@ struct vm86;
 #include
 #include
 
-/*
- * We handle most unaligned accesses in hardware. On the other hand
- * unaligned DMA can be quite expensive on some Nehalem processors.
- *
- * Based on this we disable the IP header alignment in network drivers.
- */
-#define NET_IP_ALIGN 0
-
 #define HBP_NUM 4
 
 /*
  * Default implementation of macro that returns current
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 17a13e4785fc..42467be8021f 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2435,11 +2435,10 @@ static inline int pskb_network_may_pull(struct sk_buff *skb, unsigned int len)
  * The downside to this alignment of the IP header is that the DMA is now
  * unaligned. On some architectures the cost of an unaligned DMA is high
  * and this cost outweighs the gains made by aligning the IP header.
- *
- * Since this trade off varies between architectures, we allow NET_IP_ALIGN
- * to be overridden.
  */
-#ifndef NET_IP_ALIGN
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+#define NET_IP_ALIGN 0
+#else
 #define NET_IP_ALIGN 2
 #endif