From patchwork Tue Oct 1 07:58:53 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13817623
X-Patchwork-Delegate: kuba@kernel.org
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin, Alexander Duyck, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH net-next v19 10/14] mm: page_frag: introduce prepare/probe/commit API
Date: Tue, 1 Oct 2024 15:58:53 +0800
Message-Id: <20241001075858.48936-11-linyunsheng@huawei.com>
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>

There are many use cases that need a minimum amount of memory in order to make forward progress, but perform better when more memory is available, or that need to probe the cache info so that any already-available memory can be used for frag coalescing.

Currently the skb_page_frag_refill() API is used to handle the above use cases, but the caller needs to know the internal details and access the data fields of 'struct page_frag' directly, and its implementation largely duplicates the one in the mm subsystem.

To unify those two page_frag implementations, introduce a prepare API that ensures the minimum amount of memory is available and reports how much memory the caller can actually use, and a probe API that reports the currently available memory to the caller without doing any cache refilling. The caller then either calls the commit API to report how much memory it actually used, or skips the commit if it decides not to use any memory.
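As a rough usage sketch of the new API (the caller-side code below is
illustrative only and not part of this patch; 'nc' is the caller's
page_frag_cache and fill_data() is a hypothetical writer):

	struct page_frag pfrag;
	unsigned int copied;
	void *va;

	/* prepare: ensure at least 256 bytes, learn how much is available */
	va = page_frag_alloc_refill_prepare(nc, 256, &pfrag, GFP_KERNEL);
	if (!va)
		return -ENOMEM;

	/* write up to pfrag.size bytes starting at va */
	copied = fill_data(va, pfrag.size);

	/* commit: report how much of the prepared memory was actually used */
	page_frag_commit(nc, &pfrag, copied);

The probe variants, e.g. page_frag_alloc_refill_probe(), follow the same
pattern but report failure instead of refilling the cache when the
existing page cannot satisfy 'fragsz'.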
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 135 ++++++++++++++++++++++++++++++++
 mm/page_frag_cache.c            |  21 +++++
 2 files changed, 156 insertions(+)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index b634e1338741..4e9018051956 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -40,6 +40,11 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 	return page_frag_encoded_page_pfmemalloc(nc->encoded_page);
 }
 
+static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
+{
+	return nc->offset;
+}
+
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
@@ -48,6 +53,10 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 					    struct page_frag *pfrag,
 					    unsigned int used_sz);
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+					   unsigned int fragsz,
+					   struct page_frag *pfrag,
+					   unsigned int align_mask);
 
 static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
 						    struct page_frag *pfrag,
@@ -90,6 +99,132 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }
 
+static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
+					    unsigned int fragsz,
+					    struct page_frag *pfrag,
+					    gfp_t gfp_mask,
+					    unsigned int align_mask)
+{
+	if (unlikely(!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+						align_mask)))
+		return false;
+
+	__page_frag_cache_commit(nc, pfrag, fragsz);
+	return true;
+}
+
+static inline bool page_frag_refill_align(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag,
+					  gfp_t gfp_mask, unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
+}
+
+static inline bool page_frag_refill(struct page_frag_cache *nc,
+				    unsigned int fragsz,
+				    struct page_frag *pfrag, gfp_t gfp_mask)
+{
+	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
+}
+
+static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						    unsigned int fragsz,
+						    struct page_frag *pfrag,
+						    gfp_t gfp_mask,
+						    unsigned int align_mask)
+{
+	return !!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+					   align_mask);
+}
+
+static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						  unsigned int fragsz,
+						  struct page_frag *pfrag,
+						  gfp_t gfp_mask,
+						  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+						-align);
+}
+
+static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
+					    unsigned int fragsz,
+					    struct page_frag *pfrag,
+					    gfp_t gfp_mask)
+{
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+						~0u);
+}
+
+static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+							   unsigned int fragsz,
+							   struct page_frag *pfrag,
+							   gfp_t gfp_mask,
+							   unsigned int align_mask)
+{
+	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
+}
+
+static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+							 unsigned int fragsz,
+							 struct page_frag *pfrag,
+							 gfp_t gfp_mask,
+							 unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
+						   unsigned int fragsz,
+						   struct page_frag *pfrag,
+						   gfp_t gfp_mask)
+{
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, ~0u);
+}
+
+static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
+						 unsigned int fragsz,
+						 struct page_frag *pfrag)
+{
+	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
+}
+
+static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag)
+{
+	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
+}
+
+static inline void page_frag_commit(struct page_frag_cache *nc,
+				    struct page_frag *pfrag,
+				    unsigned int used_sz)
+{
+	__page_frag_cache_commit(nc, pfrag, used_sz);
+}
+
+static inline void page_frag_commit_noref(struct page_frag_cache *nc,
+					  struct page_frag *pfrag,
+					  unsigned int used_sz)
+{
+	__page_frag_cache_commit_noref(nc, pfrag, used_sz);
+}
+
+static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
+					 unsigned int fragsz)
+{
+	VM_BUG_ON(fragsz > nc->offset);
+
+	nc->pagecnt_bias++;
+	nc->offset -= fragsz;
+}
+
 void page_frag_free(void *addr);
 
 #endif
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index a5448b44068a..c052c77a96eb 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -117,6 +117,27 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_cache_commit_noref);
 
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+					   unsigned int fragsz,
+					   struct page_frag *pfrag,
+					   unsigned int align_mask)
+{
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
+
+	size = page_frag_cache_page_size(encoded_page);
+	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+	if (unlikely(!encoded_page || offset + fragsz > size))
+		return NULL;
+
+	pfrag->page = page_frag_encoded_page_ptr(encoded_page);
+	pfrag->size = size - offset;
+	pfrag->offset = offset;
+
+	return page_frag_encoded_page_address(encoded_page) + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
+
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 				struct page_frag *pfrag, gfp_t gfp_mask,
 				unsigned int align_mask)