From patchwork Thu Jun 29 13:25:36 2017
X-Patchwork-Submitter: Olga Kornievskaia
X-Patchwork-Id: 9816935
From: Olga Kornievskaia
Date: Thu, 29 Jun 2017 09:25:36 -0400
Subject: [RFC] fix parallelism for rpc tasks
To: linux-nfs
List-ID: <linux-nfs.vger.kernel.org>

Hi folks,

On a multi-core machine, is it expected that we can have parallel RPCs handled by each of the per-core workqueues? In testing a read workload, I observe via the "top" command that a single "kworker" thread is servicing all the requests (no parallelism). This is more prominent when performing these operations over a krb5p mount.

Bruce suggested trying the patch below, and with it my testing shows the read workload spread among all the kworker threads.
Signed-off-by: Olga Kornievskaia
---
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index 0cc8383..f80e688 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -1095,7 +1095,7 @@ static int rpciod_start(void)
 	 * Create the rpciod thread and wait for it to start.
 	 */
 	dprintk("RPC:       creating workqueue rpciod\n");
-	wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM, 0);
+	wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
 	if (!wq)
 		goto out_failed;
 	rpciod_workqueue = wq;