From patchwork Wed May 24 15:33:02 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13254197
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Willem de Bruijn, David Ahern, Matthew Wilcox, Jens Axboe,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jeroen de Borst,
    Catherine Sullivan, Shailend Chand, Felix Fietkau, John Crispin,
    Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
    AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig,
    Sagi Grimberg, Chaitanya Kulkarni, Andrew Morton,
    linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org,
    linux-nvme@lists.infradead.org
Subject: [PATCH net-next 03/12] mm: Make the page_frag_cache allocator alignment param a pow-of-2
Date: Wed, 24 May 2023 16:33:02 +0100
Message-Id: <20230524153311.3625329-4-dhowells@redhat.com>
In-Reply-To: <20230524153311.3625329-1-dhowells@redhat.com>
References: <20230524153311.3625329-1-dhowells@redhat.com>
MIME-Version: 1.0
Make the page_frag_cache allocator's alignment parameter a power of 2
rather than a mask and give a warning if it isn't.

This means that it's consistent with {napi,netdev}_alloc_frag_align() and
allows __{napi,netdev}_alloc_frag_align() to be removed.

Signed-off-by: David Howells
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Jeroen de Borst
cc: Catherine Sullivan
cc: Shailend Chand
cc: Felix Fietkau
cc: John Crispin
cc: Sean Wang
cc: Mark Lee
cc: Lorenzo Bianconi
cc: Matthias Brugger
cc: AngeloGioacchino Del Regno
cc: Keith Busch
cc: Jens Axboe
cc: Christoph Hellwig
cc: Sagi Grimberg
cc: Chaitanya Kulkarni
cc: Andrew Morton
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
cc: linux-arm-kernel@lists.infradead.org
cc: linux-mediatek@lists.infradead.org
cc: linux-nvme@lists.infradead.org
---
 include/linux/gfp.h    |  4 ++--
 include/linux/skbuff.h | 22 ++++------------------
 mm/page_frag_alloc.c   |  8 +++++---
 net/core/skbuff.c      | 14 +++++++-------
 4 files changed, 18 insertions(+), 30 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 03504beb51e4..fa30100f46ad 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -306,12 +306,12 @@ struct page_frag_cache;
 extern void __page_frag_cache_drain(struct page *page, unsigned int count);
 extern void *page_frag_alloc_align(struct page_frag_cache *nc,
 				   unsigned int fragsz, gfp_t gfp_mask,
-				   unsigned int align_mask);
+				   unsigned int align);
 
 static inline void *page_frag_alloc(struct page_frag_cache *nc,
 			     unsigned int fragsz, gfp_t gfp_mask)
 {
-	return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+	return page_frag_alloc_align(nc, fragsz, gfp_mask, 1);
 }
 
 void page_frag_cache_clear(struct page_frag_cache *nc);
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 1b2ebf6113e0..41b63e72c6c3 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3158,7 +3158,7 @@ void skb_queue_purge(struct sk_buff_head *list);
 
 unsigned int skb_rbtree_purge(struct rb_root *root);
 
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align);
 
 /**
  * netdev_alloc_frag - allocate a page fragment
@@ -3169,14 +3169,7 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
  */
 static inline void *netdev_alloc_frag(unsigned int fragsz)
 {
-	return __netdev_alloc_frag_align(fragsz, ~0u);
-}
-
-static inline void *netdev_alloc_frag_align(unsigned int fragsz,
-					    unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __netdev_alloc_frag_align(fragsz, -align);
+	return netdev_alloc_frag_align(fragsz, 1);
 }
 
 struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int length,
@@ -3236,18 +3229,11 @@ static inline void skb_free_frag(void *addr)
 	page_frag_free(addr);
 }
 
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align);
 
 static inline void *napi_alloc_frag(unsigned int fragsz)
 {
-	return __napi_alloc_frag_align(fragsz, ~0u);
-}
-
-static inline void *napi_alloc_frag_align(unsigned int fragsz,
-					  unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __napi_alloc_frag_align(fragsz, -align);
+	return napi_alloc_frag_align(fragsz, 1);
 }
 
 struct sk_buff *__napi_alloc_skb(struct napi_struct *napi,
diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
index e02b81d68dc4..9d3f6fbd9a07 100644
--- a/mm/page_frag_alloc.c
+++ b/mm/page_frag_alloc.c
@@ -64,13 +64,15 @@ void page_frag_cache_clear(struct page_frag_cache *nc)
 EXPORT_SYMBOL(page_frag_cache_clear);
 
 void *page_frag_alloc_align(struct page_frag_cache *nc,
-			    unsigned int fragsz, gfp_t gfp_mask,
-			    unsigned int align_mask)
+			    unsigned int fragsz, gfp_t gfp_mask,
+			    unsigned int align)
 {
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
 
+	WARN_ON_ONCE(!is_power_of_2(align));
+
 	if (unlikely(!nc->va)) {
 refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
@@ -129,7 +131,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
+	offset &= ~(align - 1);
 	nc->offset = offset;
 
 	return nc->va + offset;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index f4a5b51aed22..cc507433b357 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -289,17 +289,17 @@ void napi_get_frags_check(struct napi_struct *napi)
 	local_bh_enable();
 }
 
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 
 	fragsz = SKB_DATA_ALIGN(fragsz);
 
-	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 }
-EXPORT_SYMBOL(__napi_alloc_frag_align);
+EXPORT_SYMBOL(napi_alloc_frag_align);
 
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	void *data;
 
@@ -307,18 +307,18 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
 	if (in_hardirq() || irqs_disabled()) {
 		struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache);
 
-		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align);
 	} else {
 		struct napi_alloc_cache *nc;
 
 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache);
-		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 		local_bh_enable();
 	}
 	return data;
 }
-EXPORT_SYMBOL(__netdev_alloc_frag_align);
+EXPORT_SYMBOL(netdev_alloc_frag_align);
 
 static struct sk_buff *napi_skb_cache_get(void)
 {
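
[Editorial note, not part of the patch] A quick way to see that the
conversion is behaviour-preserving: the removed __{napi,netdev}_
alloc_frag_align() wrappers passed -align as the mask, and in unsigned
arithmetic -align == ~(align - 1) whenever align is a power of 2, so the
new "offset &= ~(align - 1)" computes the same offset the old
"offset &= align_mask" did. Below is a minimal user-space sketch checking
that equivalence; main() and the test values are made up for illustration:

	#include <assert.h>
	#include <stdio.h>

	int main(void)
	{
		/* Alignments must be powers of 2, as the new WARN_ON_ONCE() enforces. */
		static const unsigned int aligns[]  = { 1, 2, 8, 64, 256, 4096 };
		static const unsigned int offsets[] = { 0, 1, 63, 64, 100, 4095 };
		unsigned int i, j;

		for (i = 0; i < sizeof(aligns) / sizeof(aligns[0]); i++) {
			/* What callers of the old API passed as align_mask. */
			unsigned int old_mask = -aligns[i];

			for (j = 0; j < sizeof(offsets) / sizeof(offsets[0]); j++) {
				/* old: offset &= align_mask;  new: offset &= ~(align - 1); */
				assert((offsets[j] & old_mask) ==
				       (offsets[j] & ~(aligns[i] - 1)));
			}
		}
		printf("mask and pow-of-2 semantics agree\n");
		return 0;
	}

Note also that with align == 1, ~(align - 1) is ~0u, which is why the
unaligned entry points (page_frag_alloc(), netdev_alloc_frag() and
napi_alloc_frag()) can now pass 1 where they previously passed the ~0u
mask.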