From patchwork Thu Jul 8 15:26:16 2021
X-Patchwork-Submitter: Chuck Lever III
X-Patchwork-Id: 12365407
Subject: [PATCH v3 0/3] Bulk-release pages during NFSD read splice
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-mm@kvack.org
Cc: neilb@suse.de
Date: Thu, 08 Jul 2021 11:26:16 -0400
Message-ID: <162575623717.2532.8517369487503961860.stgit@klimt.1015granger.net>

I'm using "v3" simply because the v2 series of the NFSD page allocator
work included the same bulk-release concept in a different form. v2 has
now been merged (thanks, Mel!), but the bulk-release part of that
series was postponed. Consider v3 to be an RFC refresh.

As with the page allocation side, I'm trying to reduce the average
number of times NFSD invokes the page allocation and release APIs,
because those calls can be expensive and because the page allocator is
a resource shared amongst all nfsd threads, so access to it is
partially serialized. This small series tackles a code path that is
frequently invoked when NFSD handles READ operations on local
filesystems that support splice (i.e., most of the popular ones).
The previous version of this proposal placed the unused pages on a
local list and then re-used them directly in svc_alloc_arg(), invoking
alloc_pages_bulk_array() only to fill in any remaining empty rq_pages
entries. That approach meant some workloads could accrue pages without
bound, so a finished version of that logic would have had to be
complex, possibly involving a shrinker.

In this version, I'm simply handing the pages back to the page
allocator, so all of that complexity vanishes. What makes it more
efficient is that instead of calling put_page() on each page, the code
collects the unused pages in a per-nfsd-thread array and returns them
to the allocator with a bulk-free API (release_pages()) when the array
is full.

In this version of the series, each nfsd thread never accrues more
than 16 pages. We can easily make that number larger or smaller, but
16 already reduces the rate of put_page() calls to a minute fraction
of what it was, and does not consume much additional space in
struct svc_rqst.

Comments welcome!

Reviewed-by: NeilBrown
---

Chuck Lever (3):
      NFSD: Clean up splice actor
      SUNRPC: Add svc_rqst_replace_page() API
      NFSD: Batch release pages during splice read

 fs/nfsd/vfs.c              | 20 +++++---------------
 include/linux/sunrpc/svc.h |  5 +++++
 net/sunrpc/svc.c           | 29 +++++++++++++++++++++++++++++
 3 files changed, 39 insertions(+), 15 deletions(-)

--
Chuck Lever