From patchwork Tue Mar 12 17:22:46 2019
From: Dongli Zhang
Date: Tue, 12 Mar 2019 10:22:46 -0700 (PDT)
Subject: virtio-blk: should num_vqs be limited by num_possible_cpus()?
List: linux-block@vger.kernel.org
With the below qemu command line, where num-queues for virtio-blk is
larger than the number of possible cpus, I observed that there is one
MSI-X vector for config and one vector shared by all queues:

qemu: "-smp 4" while "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=6"

# cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
... ...
 24:          0          0          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          0          0          0         59   PCI-MSI 65537-edge      virtio0-virtqueues
... ...

However, when num-queues is the same as the number of possible cpus:

qemu: "-smp 4" while "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=4"

# cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
... ...
 24:          0          0          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          2          0          0          0   PCI-MSI 65537-edge      virtio0-req.0
 26:          0         35          0          0   PCI-MSI 65538-edge      virtio0-req.1
 27:          0          0         32          0   PCI-MSI 65539-edge      virtio0-req.2
 28:          0          0          0          0   PCI-MSI 65540-edge      virtio0-req.3
... ...

In the above case, there is one MSI-X vector per queue. The single shared
vector in the first case appears because the maximum number of queues is
not limited by the number of possible cpus.

By default, nvme (regardless of write_queues and poll_queues) and
xen-blkfront limit the number of queues with num_possible_cpus().

Is this by design, or can we fix it with the patch below? (min_t() is
used rather than plain min() because num_vqs is unsigned short while
num_possible_cpus() returns unsigned int, and the kernel's min() insists
on matching types.)

---

PS: The same issue applies to virtio-scsi as well.

Thank you very much!

Dongli Zhang

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 4bc083b..df95ce3 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -513,6 +513,8 @@ static int init_vq(struct virtio_blk *vblk)
 	if (err)
 		num_vqs = 1;
 
+	num_vqs = min_t(unsigned int, num_possible_cpus(), num_vqs);
+
 	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
 	if (!vblk->vqs)
 		return -ENOMEM;
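For the virtio-scsi case mentioned in the PS, the analogous clamp would
presumably go into virtscsi_probe() in drivers/scsi/virtio_scsi.c, right
after num_queues is read from the device config. A minimal sketch follows
(untested; the surrounding lines are quoted from memory and may not match
the current tree exactly):

	/* In virtscsi_probe(): read the queue count from config space,
	 * falling back to a single queue if the field is absent.
	 */
	num_queues = virtscsi_config_get(vdev, num_queues) ? : 1;

	/* Sketch: cap the number of request queues at the number of
	 * possible cpus, mirroring the virtio-blk change above.
	 */
	num_queues = min_t(unsigned int, num_possible_cpus(), num_queues);

As in the virtio-blk hunk, min_t() sidesteps the type mismatch, since
num_queues there is a u32 read from config space.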