From patchwork Sat Jul 10 07:43:19 2021
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 12368381
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Yunsheng Lin
Subject: [PATCH rfc v2 2/5] page_pool: add interface for getting and setting pagecnt_bias
Date: Sat, 10 Jul 2021 15:43:19 +0800
Message-ID: <1625903002-31619-3-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1625903002-31619-1-git-send-email-linyunsheng@huawei.com>
References: <1625903002-31619-1-git-send-email-linyunsheng@huawei.com>
X-Mailing-List: bpf@vger.kernel.org

As suggested by Alexander, "A DMA mapping should be page aligned anyway so
the lower 12 bits would be reserved 0", it might make more sense to
repurpose the lower 12 bits of the dma address to store the pagecnt_bias
for the elevated refcnt case in page pool.

As the newly added page_pool_get_pagecnt_bias() may be called outside of
the softirq context, annotate the accesses to page->dma_addr[0] with
READ_ONCE() and WRITE_ONCE().

The other three interfaces using page->dma_addr[0] are only called in the
softirq context during normal rx processing, so hopefully the barriers in
the rx processing will ensure the correct ordering between getting and
setting the pagecnt_bias.
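[ Illustration, not part of the patch: the packing scheme above can be
  demonstrated with a minimal user-space sketch. The EX_PAGE_* macros
  below are stand-ins for the kernel's PAGE_SHIFT/PAGE_SIZE/PAGE_MASK,
  assuming 4K pages. ]

#include <assert.h>
#include <stdio.h>

#define EX_PAGE_SHIFT	12
#define EX_PAGE_SIZE	(1UL << EX_PAGE_SHIFT)
#define EX_PAGE_MASK	(~(EX_PAGE_SIZE - 1))

int main(void)
{
	unsigned long dma_addr = 0xabcd000UL; /* page aligned: low 12 bits are 0 */
	int bias = 0x7f;                      /* pagecnt_bias, must fit in 12 bits */
	unsigned long packed;

	/* Pack: page-aligned address in the high bits, bias in the low 12 bits. */
	packed = (dma_addr & EX_PAGE_MASK) | ((unsigned long)bias & ~EX_PAGE_MASK);

	/* Unpack: each field is recovered by masking off the other. */
	assert((packed & EX_PAGE_MASK) == dma_addr);
	assert((int)(packed & ~EX_PAGE_MASK) == bias);

	printf("dma_addr=%#lx pagecnt_bias=%d\n",
	       packed & EX_PAGE_MASK, (int)(packed & ~EX_PAGE_MASK));
	return 0;
}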
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/net/page_pool.h | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 8d7744d..5746f17 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -200,7 +200,7 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
 {
-	dma_addr_t ret = page->dma_addr[0];
+	dma_addr_t ret = READ_ONCE(page->dma_addr[0]) & PAGE_MASK;
 
 	if (sizeof(dma_addr_t) > sizeof(unsigned long))
 		ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
 	return ret;
@@ -208,11 +208,31 @@ static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
 
 static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
 {
-	page->dma_addr[0] = addr;
+	unsigned long dma_addr_0 = READ_ONCE(page->dma_addr[0]);
+
+	dma_addr_0 &= ~PAGE_MASK;
+	dma_addr_0 |= (addr & PAGE_MASK);
+	WRITE_ONCE(page->dma_addr[0], dma_addr_0);
+
 	if (sizeof(dma_addr_t) > sizeof(unsigned long))
 		page->dma_addr[1] = upper_32_bits(addr);
 }
 
+static inline int page_pool_get_pagecnt_bias(struct page *page)
+{
+	return (READ_ONCE(page->dma_addr[0]) & ~PAGE_MASK);
+}
+
+static inline void page_pool_set_pagecnt_bias(struct page *page, int bias)
+{
+	unsigned long dma_addr_0 = READ_ONCE(page->dma_addr[0]);
+
+	dma_addr_0 &= PAGE_MASK;
+	dma_addr_0 |= (bias & ~PAGE_MASK);
+
+	WRITE_ONCE(page->dma_addr[0], dma_addr_0);
+}
+
 static inline bool is_page_pool_compiled_in(void)
 {
 #ifdef CONFIG_PAGE_POOL
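[ For context, a hypothetical driver rx path consuming the new helpers
  might look roughly like the sketch below. This is only an illustration
  of the elevated-refcnt scheme this series builds towards; the actual
  users are presumably wired up by later patches in the series, and
  ex_get_rx_frag() is a made-up name. ]

static struct page *ex_get_rx_frag(struct page_pool *pool, struct page *page)
{
	int bias = page_pool_get_pagecnt_bias(page);

	if (bias > 0) {
		/* Hand out one of the references pre-charged at alloc time. */
		page_pool_set_pagecnt_bias(page, bias - 1);
		return page;
	}

	/* Bias exhausted: fall back to allocating a fresh page. */
	return page_pool_dev_alloc_pages(pool);
}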