From patchwork Fri Mar 12 21:57:11 2021
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 12136125
Subject: [PATCH] SUNRPC: Refresh rq_pages using a bulk page allocator
From: Chuck Lever
To: mgorman@techsingularity.net
Cc: akpm@linux-foundation.org, brouer@redhat.com, hch@infradead.org,
    alexander.duyck@gmail.com, willy@infradead.org,
    linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
    linux-mm@kvack.org, linux-nfs@vger.kernel.org
Date: Fri, 12 Mar 2021 16:57:11 -0500
Message-ID: <161558613209.1366.1492710238067504151.stgit@klimt.1015granger.net>
User-Agent: StGit/1.0-5-g755c
List-ID: 

Reduce the rate at which nfsd threads hammer on the page allocator.
This improves throughput scalability by enabling the threads to run
more independently of each other.

Signed-off-by: Chuck Lever
Reviewed-by: Alexander Duyck
---
Hi Mel-

This patch replaces patch 5/7 in v4 of your alloc_pages_bulk()
series. It implements code clean-ups suggested by Alexander Duyck.
It builds and has seen some light testing.
 net/sunrpc/svc_xprt.c |   39 +++++++++++++++++++++++++++------------
 1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 4d58424db009..791ea24159b1 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -661,11 +661,13 @@ static void svc_check_conn_limits(struct svc_serv *serv)
 static int svc_alloc_arg(struct svc_rqst *rqstp)
 {
 	struct svc_serv *serv = rqstp->rq_server;
+	unsigned long needed;
 	struct xdr_buf *arg;
+	struct page *page;
+	LIST_HEAD(list);
 	int pages;
 	int i;
 
-	/* now allocate needed pages. If we get a failure, sleep briefly */
 	pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
 	if (pages > RPCSVC_MAXPAGES) {
 		pr_warn_once("svc: warning: pages=%u > RPCSVC_MAXPAGES=%lu\n",
@@ -673,19 +675,32 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
 		/* use as many pages as possible */
 		pages = RPCSVC_MAXPAGES;
 	}
-	for (i = 0; i < pages ; i++)
-		while (rqstp->rq_pages[i] == NULL) {
-			struct page *p = alloc_page(GFP_KERNEL);
-			if (!p) {
-				set_current_state(TASK_INTERRUPTIBLE);
-				if (signalled() || kthread_should_stop()) {
-					set_current_state(TASK_RUNNING);
-					return -EINTR;
-				}
-				schedule_timeout(msecs_to_jiffies(500));
+
+	for (needed = 0, i = 0; i < pages ; i++) {
+		if (!rqstp->rq_pages[i])
+			needed++;
+	}
+	i = 0;
+	while (needed) {
+		needed -= alloc_pages_bulk(GFP_KERNEL, 0, needed, &list);
+		for (; i < pages; i++) {
+			if (rqstp->rq_pages[i])
+				continue;
+			page = list_first_entry_or_null(&list, struct page, lru);
+			if (likely(page)) {
+				list_del(&page->lru);
+				rqstp->rq_pages[i] = page;
+				continue;
 			}
-			rqstp->rq_pages[i] = p;
+			set_current_state(TASK_INTERRUPTIBLE);
+			if (signalled() || kthread_should_stop()) {
+				set_current_state(TASK_RUNNING);
+				return -EINTR;
+			}
+			schedule_timeout(msecs_to_jiffies(500));
+			break;
 		}
+	}
 	rqstp->rq_page_end = &rqstp->rq_pages[pages];
 	rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */
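For reference, the refill strategy the patch adopts (count the empty
slots first, then loop on a bulk allocator that may return fewer items
than requested) can be sketched in plain userspace C. This is only an
illustrative model, not kernel code: mock_alloc_bulk(), its
partial-success behaviour, and the 64-byte heap blocks are all
assumptions standing in for alloc_pages_bulk() and struct page.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define NPAGES 8

/* Stand-in for alloc_pages_bulk(): produce up to nr "pages" (plain
 * heap blocks here) into out[], returning how many were actually
 * allocated. A bulk allocator may grant fewer than requested, which
 * is exactly the case the caller's retry loop must handle. */
static size_t mock_alloc_bulk(size_t nr, void **out)
{
	/* Simulate partial success: never grant more than 3 per call. */
	size_t granted = nr < 3 ? nr : 3;
	for (size_t i = 0; i < granted; i++)
		out[i] = malloc(64);
	return granted;
}

/* Refill the NULL slots of pages[], mirroring the structure of the
 * patched svc_alloc_arg(): count the holes, then loop until every
 * hole is filled, resuming the scan where the last batch ran out. */
static void refill(void *pages[NPAGES])
{
	void *batch[NPAGES];
	size_t needed = 0;

	for (size_t i = 0; i < NPAGES; i++)
		if (!pages[i])
			needed++;

	size_t i = 0;
	while (needed) {
		size_t got = mock_alloc_bulk(needed, batch);
		size_t b = 0;

		needed -= got;
		/* Place this batch into the remaining empty slots;
		 * i persists across iterations so filled slots are
		 * never rescanned. */
		for (; i < NPAGES && b < got; i++) {
			if (pages[i])
				continue;
			pages[i] = batch[b++];
		}
	}
}
```

The point of the two-phase shape is that each nfsd thread makes one
allocator call per batch instead of one per page, which is where the
reduced contention in the commit message comes from.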